Hybrid ARIMA-LSTM Code - Google Trends

The hybrid ARIMA-LSTM model is open to a wide range of experimentation. For best performance, a balance must be struck between the levels of volatility that suit the ARIMA and LSTM components. Using shorter moving-average (MA) periods, which produce a non-mesokurtic distribution, may achieve a better volatility balance between the two models.
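As a toy illustration of the kurtosis criterion used later in the notebook (a sketch on synthetic data, not the notebook's series): with `fisher=False`, `scipy.stats.kurtosis` returns Pearson's K, which equals 3 for a mesokurtic (normal) distribution, so the moving-average period search below targets K close to 3.

```python
import numpy as np
from scipy.stats import kurtosis

# Pearson's kurtosis (fisher=False) of a normal sample is close to 3,
# the "mesokurtic" target used when tuning MA periods later on.
rng = np.random.default_rng(0)
normal_sample = rng.normal(size=100_000)
k = kurtosis(normal_sample, fisher=False)
print(round(k, 2))  # close to 3 for a mesokurtic series
```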

Import Libraries

In [1]:
import pandas as pd
pd.set_option('display.max_rows', 500)
import timeit
In [2]:
!pip install -q -U keras-tuner
     |████████████████████████████████| 98 kB 3.9 MB/s 
In [3]:
import keras_tuner as kt
In [4]:
!pip install pmdarima
Collecting pmdarima
  Downloading pmdarima-1.8.4-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl (1.4 MB)
     |████████████████████████████████| 1.4 MB 7.2 MB/s 
Requirement already satisfied: numpy>=1.19.3 in /usr/local/lib/python3.7/dist-packages (from pmdarima) (1.19.5)
Collecting statsmodels!=0.12.0,>=0.11
  Downloading statsmodels-0.13.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (9.8 MB)
     |████████████████████████████████| 9.8 MB 54.3 MB/s 
Requirement already satisfied: pandas>=0.19 in /usr/local/lib/python3.7/dist-packages (from pmdarima) (1.1.5)
Requirement already satisfied: setuptools!=50.0.0,>=38.6.0 in /usr/local/lib/python3.7/dist-packages (from pmdarima) (57.4.0)
Requirement already satisfied: Cython!=0.29.18,>=0.29 in /usr/local/lib/python3.7/dist-packages (from pmdarima) (0.29.24)
Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.7/dist-packages (from pmdarima) (1.1.0)
Requirement already satisfied: urllib3 in /usr/local/lib/python3.7/dist-packages (from pmdarima) (1.24.3)
Requirement already satisfied: scipy>=1.3.2 in /usr/local/lib/python3.7/dist-packages (from pmdarima) (1.4.1)
Requirement already satisfied: scikit-learn>=0.22 in /usr/local/lib/python3.7/dist-packages (from pmdarima) (1.0.1)
Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.7/dist-packages (from pandas>=0.19->pmdarima) (2018.9)
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.7/dist-packages (from pandas>=0.19->pmdarima) (2.8.2)
Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/dist-packages (from python-dateutil>=2.7.3->pandas>=0.19->pmdarima) (1.15.0)
Requirement already satisfied: threadpoolctl>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from scikit-learn>=0.22->pmdarima) (3.0.0)
Requirement already satisfied: patsy>=0.5.2 in /usr/local/lib/python3.7/dist-packages (from statsmodels!=0.12.0,>=0.11->pmdarima) (0.5.2)
Installing collected packages: statsmodels, pmdarima
  Attempting uninstall: statsmodels
    Found existing installation: statsmodels 0.10.2
    Uninstalling statsmodels-0.10.2:
      Successfully uninstalled statsmodels-0.10.2
Successfully installed pmdarima-1.8.4 statsmodels-0.13.1
In [5]:
import pmdarima
In [6]:
url = 'https://launchpad.net/~mario-mariomedina/+archive/ubuntu/talib/+files'
!wget $url/libta-lib0_0.4.0-oneiric1_amd64.deb -qO libta.deb
!wget $url/ta-lib0-dev_0.4.0-oneiric1_amd64.deb -qO ta.deb
!dpkg -i libta.deb ta.deb
!pip install ta-lib
import talib
Selecting previously unselected package libta-lib0.
(Reading database ... 155222 files and directories currently installed.)
Preparing to unpack libta.deb ...
Unpacking libta-lib0 (0.4.0-oneiric1) ...
Selecting previously unselected package ta-lib0-dev.
Preparing to unpack ta.deb ...
Unpacking ta-lib0-dev (0.4.0-oneiric1) ...
Setting up libta-lib0 (0.4.0-oneiric1) ...
Setting up ta-lib0-dev (0.4.0-oneiric1) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
Processing triggers for libc-bin (2.27-3ubuntu1.3) ...
/sbin/ldconfig.real: /usr/local/lib/python3.7/dist-packages/ideep4py/lib/libmkldnn.so.0 is not a symbolic link

Collecting ta-lib
  Downloading TA-Lib-0.4.22.tar.gz (268 kB)
     |████████████████████████████████| 268 kB 8.8 MB/s 
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
    Preparing wheel metadata ... done
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from ta-lib) (1.19.5)
Building wheels for collected packages: ta-lib
  Building wheel for ta-lib (PEP 517) ... done
  Created wheel for ta-lib: filename=TA_Lib-0.4.22-cp37-cp37m-linux_x86_64.whl size=1465648 sha256=6864bc894fca760ea7a1568a10d29f53fda00cad2672992e2a9901b7a32c8f74
  Stored in directory: /root/.cache/pip/wheels/7b/63/a9/144081748d9c4f0a09b4670c7b3c414bcb33ff97f0724c753a
Successfully built ta-lib
Installing collected packages: ta-lib
Successfully installed ta-lib-0.4.22
In [7]:
import tensorflow
import statsmodels.tsa.api
import keras
import sklearn
In [8]:
from tensorflow.keras.optimizers import Adam
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, LSTM, Dropout, Bidirectional, BatchNormalization, Embedding, TimeDistributed, LeakyReLU, GRU
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint, ReduceLROnPlateau
In [9]:
from keras.models import Sequential, load_model
from keras.layers import Dense, LSTM, Activation, Dropout
from keras import backend as K
from keras.utils.generic_utils import get_custom_objects
from keras.callbacks import ModelCheckpoint, EarlyStopping
from keras.regularizers import l1_l2
In [10]:
import math
In [11]:
from statsmodels.tsa.api import VAR
from statsmodels.tsa.statespace.varmax import VARMAX,VARMAXResults
In [12]:
from sklearn.metrics import mean_squared_error, mean_absolute_percentage_error, mean_absolute_error
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
In [13]:
from matplotlib import pyplot
In [14]:
import json
import datetime
import pandas as pd
import numpy as np
import os
from scipy.stats import kurtosis
import pmdarima as pm
from pmdarima import auto_arima
from talib import abstract
import matplotlib.pyplot as plt
# plt.rcParams.update({'font.size': 16})
from matplotlib.pyplot import figure
from numpy import array
from numpy import hstack
from keras.models import Sequential
from keras.layers import LSTM
from keras.layers import Dense
from keras.layers import RepeatVector
from keras.layers import TimeDistributed
In [15]:
from keras.utils.generic_utils import get_custom_objects
from tensorflow.keras.utils import plot_model
In [16]:
import warnings
from statsmodels.tools.sm_exceptions import ConvergenceWarning
warnings.simplefilter('ignore', ConvergenceWarning)

Load Data

In [2]:
from google.colab import drive
drive.mount('/content/drive')
Mounted at /content/drive
In [ ]:
cd drive/MyDrive/Stock price prediction/
In [18]:
cd drive/MyDrive/Stock price prediction/Generated datasets
/content/drive/.shortcut-targets-by-id/1IaGjVBlTspxI2CHSrxfYnaiYvsaG0pHs/Stock price prediction/Generated datasets
In [19]:
df = pd.read_csv("FULL_Data_google_COVID_bull_bear.csv",parse_dates=[0])
df.tail(10)
Out[19]:
Unnamed: 0 Unnamed: 0.1 Unnamed: 0.1.1 Unnamed: 0.1.1.1 Open High Low Close Adj Close Volume MA7 MA21 MACD 20SD upper_band lower_band EMA logmomentum absolute of 3 comp angle of 3 comp absolute of 6 comp angle of 6 comp absolute of 9 comp angle of 9 comp Date search COVID positiveIncrease COVID deathIncrease bull score bear score fourier bull 10 fourier bull 30 fourier bear 10 fourier bear 30
1592 1592 1781 1781 1781 150.199997 151.429993 150.059998 150.809998 150.809998 56787900.0 150.565717 148.423811 -1.137777 2.817933 154.059677 142.787944 150.767809 5.009368 93.428749 -0.061228 100.779503 -0.039111 103.599003 -0.022436 2021-11-09 19 112313 1258 0.119141 0.111328 NaN NaN NaN NaN
1593 1593 1782 1782 1782 150.020004 150.130005 147.850006 147.919998 147.919998 65187100.0 150.417145 148.729049 -1.236913 2.144358 153.017766 144.440332 148.869268 4.989888 92.922909 -0.061683 99.694365 -0.039762 101.872301 -0.022657 2021-11-10 19 80301 1470 0.154297 0.109375 NaN NaN NaN NaN
1594 1594 1783 1783 1783 148.960007 149.429993 147.679993 147.869995 147.869995 41000000.0 150.110001 149.060477 -1.165047 1.767475 152.595428 145.525526 148.203086 4.989548 92.416471 -0.062129 98.604584 -0.040391 100.137594 -0.022839 2021-11-11 19 94975 1662 0.102845 0.126915 NaN NaN NaN NaN
1595 1595 1784 1784 1784 148.429993 150.399994 147.479996 149.990005 149.990005 63632600.0 149.895715 149.357144 -0.869308 1.420732 152.198608 146.515681 149.394365 5.003879 91.909483 -0.062566 97.510555 -0.040998 98.396260 -0.022980 2021-11-12 19 55499 797 0.157277 0.080595 NaN NaN NaN NaN
1596 1596 1785 1785 1785 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 2021-11-13 19 146529 2505 0.139459 0.083243 NaN NaN NaN NaN
1597 1597 1786 1786 1786 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 2021-11-14 19 40964 479 0.151261 0.100840 NaN NaN NaN NaN
1598 1598 1787 1787 1787 150.369995 151.880005 149.429993 150.000000 150.000000 59222800.0 149.758571 149.602859 -0.907641 1.229694 152.062246 147.143471 149.798122 5.003946 91.401994 -0.062993 96.412672 -0.041581 96.649685 -0.023077 2021-11-15 22 30290 148 0.136737 0.109389 NaN NaN NaN NaN
1599 1599 1788 1788 1788 149.940002 151.490005 149.339996 151.000000 151.000000 59256200.0 149.718571 149.814763 -0.791320 1.236243 152.287250 147.342277 150.599374 5.010635 90.894052 -0.063410 95.311334 -0.042140 94.899260 -0.023130 2021-11-16 22 138962 1294 0.135531 0.115385 NaN NaN NaN NaN
1600 1600 1789 1789 1789 151.000000 155.000000 150.990005 153.490005 153.490005 88807000.0 150.154286 150.040002 -0.657719 1.467121 152.974245 147.105759 152.526461 5.027099 90.385704 -0.063817 94.206941 -0.042673 93.146378 -0.023135 2021-11-17 22 87626 1290 0.100870 0.126957 NaN NaN NaN NaN
1601 1601 1790 1790 1790 153.710007 158.669998 153.050003 157.869995 157.869995 137659100.0 151.162857 150.450002 -0.609656 2.267825 154.985653 145.914351 156.088817 5.055417 89.877000 -0.064214 93.099895 -0.043179 91.392433 -0.023090 2021-11-18 22 111404 1637 0.145098 0.121569 NaN NaN NaN NaN
In [ ]:
cd ..
In [24]:
cd Archana - LSTM Hybrid/Outputs/Gtrends
/content/drive/.shortcut-targets-by-id/1IaGjVBlTspxI2CHSrxfYnaiYvsaG0pHs/Stock price prediction/Archana - LSTM Hybrid/Outputs/Gtrends
In [25]:
pd.to_datetime(df[np.isnan(df.Close)==True]['Date']).dt.day_name().head(5)
Out[25]:
0    Saturday
1      Sunday
3     Tuesday
7    Saturday
8      Sunday
Name: Date, dtype: object
In [26]:
len(pd.to_datetime(df[np.isnan(df.Close)==True]['Date']).dt.day_name())
Out[26]:
497
In [27]:
len(df)
Out[27]:
1602
In [28]:
len(df) - len(pd.to_datetime(df[np.isnan(df.Close)==True]['Date']).dt.day_name())
Out[28]:
1105
In [29]:
df.dropna(inplace=True)
len(df)
Out[29]:
1080
In [30]:
pd.to_datetime(df[np.isnan(df.Close)==True]['Date']).dt.day_name()
Out[30]:
Series([], Name: Date, dtype: object)
In [31]:
df.head(5)
Out[31]:
Unnamed: 0 Unnamed: 0.1 Unnamed: 0.1.1 Unnamed: 0.1.1.1 Open High Low Close Adj Close Volume MA7 MA21 MACD 20SD upper_band lower_band EMA logmomentum absolute of 3 comp angle of 3 comp absolute of 6 comp angle of 6 comp absolute of 9 comp angle of 9 comp Date search COVID positiveIncrease COVID deathIncrease bull score bear score fourier bull 10 fourier bull 30 fourier bear 10 fourier bear 30
2 2 191 191 191 36.220001 36.325001 35.775002 35.875000 34.054882 57111200.0 36.173571 36.751904 0.303356 0.960520 38.672945 34.830864 35.924548 3.551770 38.458011 0.046984 29.704545 0.102857 43.304973 -0.053955 2017-07-03 15 0 0 0.666667 0.000000 0.142778 0.146810 0.100537 0.099251
4 4 193 193 193 35.922501 36.197498 35.680000 36.022499 34.194897 86278400.0 36.095357 36.634762 0.328795 0.852735 38.340231 34.929292 35.989849 3.555991 38.240991 0.049445 29.954520 0.099254 43.438321 -0.053936 2017-07-05 15 0 0 0.400000 0.000000 0.144487 0.145833 0.100630 0.096361
5 5 194 194 194 35.755001 35.875000 35.602501 35.682499 33.872143 96515200.0 35.984999 36.495238 0.346702 0.677629 37.850495 35.139980 35.784949 3.546235 38.027974 0.051918 30.209839 0.095602 43.557403 -0.053820 2017-07-06 15 0 0 0.142857 0.142857 0.145346 0.145164 0.100672 0.094761
6 6 195 195 195 35.724998 36.187500 35.724998 36.044998 34.216255 76806800.0 36.001071 36.362023 0.387422 0.387634 37.137291 35.586756 35.958315 3.556633 37.818962 0.054401 30.470232 0.091907 43.662260 -0.053608 2017-07-07 15 0 0 0.333333 0.000000 0.146208 0.144377 0.100711 0.093072
9 9 198 198 198 36.027500 36.487499 35.842499 36.264999 34.425095 84362400.0 35.973571 36.243809 0.388315 0.308042 36.859893 35.627725 36.162771 3.562891 37.613953 0.056893 30.735430 0.088177 43.752965 -0.053302 2017-07-10 14 0 0 0.000000 0.000000 0.148802 0.141354 0.100808 0.087587
In [32]:
stock_col= list(df.columns)
stock_col = stock_col[4:len(stock_col)]
In [33]:
dataset_final = df[stock_col]
In [34]:
dataset_final.head(5)
Out[34]:
Open High Low Close Adj Close Volume MA7 MA21 MACD 20SD upper_band lower_band EMA logmomentum absolute of 3 comp angle of 3 comp absolute of 6 comp angle of 6 comp absolute of 9 comp angle of 9 comp Date search COVID positiveIncrease COVID deathIncrease bull score bear score fourier bull 10 fourier bull 30 fourier bear 10 fourier bear 30
2 36.220001 36.325001 35.775002 35.875000 34.054882 57111200.0 36.173571 36.751904 0.303356 0.960520 38.672945 34.830864 35.924548 3.551770 38.458011 0.046984 29.704545 0.102857 43.304973 -0.053955 2017-07-03 15 0 0 0.666667 0.000000 0.142778 0.146810 0.100537 0.099251
4 35.922501 36.197498 35.680000 36.022499 34.194897 86278400.0 36.095357 36.634762 0.328795 0.852735 38.340231 34.929292 35.989849 3.555991 38.240991 0.049445 29.954520 0.099254 43.438321 -0.053936 2017-07-05 15 0 0 0.400000 0.000000 0.144487 0.145833 0.100630 0.096361
5 35.755001 35.875000 35.602501 35.682499 33.872143 96515200.0 35.984999 36.495238 0.346702 0.677629 37.850495 35.139980 35.784949 3.546235 38.027974 0.051918 30.209839 0.095602 43.557403 -0.053820 2017-07-06 15 0 0 0.142857 0.142857 0.145346 0.145164 0.100672 0.094761
6 35.724998 36.187500 35.724998 36.044998 34.216255 76806800.0 36.001071 36.362023 0.387422 0.387634 37.137291 35.586756 35.958315 3.556633 37.818962 0.054401 30.470232 0.091907 43.662260 -0.053608 2017-07-07 15 0 0 0.333333 0.000000 0.146208 0.144377 0.100711 0.093072
9 36.027500 36.487499 35.842499 36.264999 34.425095 84362400.0 35.973571 36.243809 0.388315 0.308042 36.859893 35.627725 36.162771 3.562891 37.613953 0.056893 30.735430 0.088177 43.752965 -0.053302 2017-07-10 14 0 0 0.000000 0.000000 0.148802 0.141354 0.100808 0.087587
In [35]:
stock_col= list(df.columns)
stock_col = stock_col[4:len(stock_col)-8]
dataset_final = df[stock_col]
dataset_final.head(5)
Out[35]:
Open High Low Close Adj Close Volume MA7 MA21 MACD 20SD upper_band lower_band EMA logmomentum absolute of 3 comp angle of 3 comp absolute of 6 comp angle of 6 comp absolute of 9 comp angle of 9 comp Date search
2 36.220001 36.325001 35.775002 35.875000 34.054882 57111200.0 36.173571 36.751904 0.303356 0.960520 38.672945 34.830864 35.924548 3.551770 38.458011 0.046984 29.704545 0.102857 43.304973 -0.053955 2017-07-03 15
4 35.922501 36.197498 35.680000 36.022499 34.194897 86278400.0 36.095357 36.634762 0.328795 0.852735 38.340231 34.929292 35.989849 3.555991 38.240991 0.049445 29.954520 0.099254 43.438321 -0.053936 2017-07-05 15
5 35.755001 35.875000 35.602501 35.682499 33.872143 96515200.0 35.984999 36.495238 0.346702 0.677629 37.850495 35.139980 35.784949 3.546235 38.027974 0.051918 30.209839 0.095602 43.557403 -0.053820 2017-07-06 15
6 35.724998 36.187500 35.724998 36.044998 34.216255 76806800.0 36.001071 36.362023 0.387422 0.387634 37.137291 35.586756 35.958315 3.556633 37.818962 0.054401 30.470232 0.091907 43.662260 -0.053608 2017-07-07 15
9 36.027500 36.487499 35.842499 36.264999 34.425095 84362400.0 35.973571 36.243809 0.388315 0.308042 36.859893 35.627725 36.162771 3.562891 37.613953 0.056893 30.735430 0.088177 43.752965 -0.053302 2017-07-10 14
In [36]:
# Set the date to datetime data
datetime_series = pd.to_datetime(dataset_final['Date'])
datetime_index = pd.DatetimeIndex(datetime_series.values)
dataset_final = dataset_final.set_index(datetime_index)
dataset_final = dataset_final.sort_values(by='Date')
dataset_final = dataset_final.drop(columns='Date')
dataset_final.head(5)
Out[36]:
Open High Low Close Adj Close Volume MA7 MA21 MACD 20SD upper_band lower_band EMA logmomentum absolute of 3 comp angle of 3 comp absolute of 6 comp angle of 6 comp absolute of 9 comp angle of 9 comp search
2017-07-03 36.220001 36.325001 35.775002 35.875000 34.054882 57111200.0 36.173571 36.751904 0.303356 0.960520 38.672945 34.830864 35.924548 3.551770 38.458011 0.046984 29.704545 0.102857 43.304973 -0.053955 15
2017-07-05 35.922501 36.197498 35.680000 36.022499 34.194897 86278400.0 36.095357 36.634762 0.328795 0.852735 38.340231 34.929292 35.989849 3.555991 38.240991 0.049445 29.954520 0.099254 43.438321 -0.053936 15
2017-07-06 35.755001 35.875000 35.602501 35.682499 33.872143 96515200.0 35.984999 36.495238 0.346702 0.677629 37.850495 35.139980 35.784949 3.546235 38.027974 0.051918 30.209839 0.095602 43.557403 -0.053820 15
2017-07-07 35.724998 36.187500 35.724998 36.044998 34.216255 76806800.0 36.001071 36.362023 0.387422 0.387634 37.137291 35.586756 35.958315 3.556633 37.818962 0.054401 30.470232 0.091907 43.662260 -0.053608 15
2017-07-10 36.027500 36.487499 35.842499 36.264999 34.425095 84362400.0 35.973571 36.243809 0.388315 0.308042 36.859893 35.627725 36.162771 3.562891 37.613953 0.056893 30.735430 0.088177 43.752965 -0.053302 14

Train & Test Dataset for Multistep Process

In [37]:
# Get features and target
X_value = pd.DataFrame(dataset_final.iloc[:, :])
y_value = pd.DataFrame(dataset_final.iloc[:, 3])
In [38]:
y_value.head(5)
Out[38]:
Close
2017-07-03 35.875000
2017-07-05 36.022499
2017-07-06 35.682499
2017-07-07 36.044998
2017-07-10 36.264999
In [39]:
# Normalize the data to the range [-1, 1]
X_scaler = MinMaxScaler(feature_range=(-1, 1))
y_scaler = MinMaxScaler(feature_range=(-1, 1))
X_scaler.fit(X_value)
y_scaler.fit(y_value)
Out[39]:
MinMaxScaler(feature_range=(-1, 1))
In [40]:
X_scale_dataset = X_scaler.fit_transform(X_value)
y_scale_dataset = y_scaler.fit_transform(y_value)
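Two caveats in this cell: calling `fit` and then `fit_transform` refits the scalers redundantly, and fitting on the full dataset lets the test period influence the scaling. A minimal sketch of the leakage-free alternative (toy data, not the notebook's), fitting on the training slice only:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Sketch: fit the scaler on the training slice only, then reuse its
# statistics on the test slice. Test values beyond the training range
# fall outside [-1, 1], which is expected and leakage-free.
values = np.arange(100, dtype=float).reshape(-1, 1)  # upward-trending toy series
train, test = values[:75], values[75:]

scaler = MinMaxScaler(feature_range=(-1, 1))
train_scaled = scaler.fit_transform(train)  # fit on train only
test_scaled = scaler.transform(test)        # no refit on test
```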
In [41]:
X_scale_dataset.shape, y_scale_dataset.shape,
Out[41]:
((1080, 21), (1080, 1))
In [42]:
X_value.shape[1]
Out[42]:
21

N Steps Definition

In [43]:
n_steps_in = 3
n_features = X_value.shape[1] # 21 features
n_steps_out = 1
In [44]:
# Reshape the data
'''Set the data input and output steps:
    here we use n_steps_in (3) days of data to predict n_steps_out (1) day's price,
    reshaped to (None, n_steps_in, number of features) for LSTM input'''
# Get X/y dataset
def get_X_y(X_data, y_data):
    X = list()
    y = list()
    yc = list()

    length = len(X_data)
    for i in range(length):
        X_value = X_data[i: i + n_steps_in][:, :]
        y_value = y_data[i + n_steps_in: i + (n_steps_in + n_steps_out)][:, 0]
        yc_value = y_data[i: i + n_steps_in][:, :]
        # Keep only complete windows (avoids hardcoding the step sizes)
        if len(X_value) == n_steps_in and len(y_value) == n_steps_out:
            X.append(X_value)
            y.append(y_value)
            yc.append(yc_value)

    return np.array(X), np.array(y), np.array(yc)
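A quick sanity check of the windowing logic in `get_X_y`, as a standalone sketch (`make_windows` is a hypothetical replica of the loop above) on a toy array:

```python
import numpy as np

n_steps_in, n_steps_out = 3, 1  # same windowing as in the notebook

def make_windows(X_data, y_data):
    # Slide a window of n_steps_in rows over X, pairing each with the
    # following n_steps_out target rows; drop incomplete windows at the end.
    X, y = [], []
    for i in range(len(X_data)):
        X_win = X_data[i: i + n_steps_in]
        y_win = y_data[i + n_steps_in: i + n_steps_in + n_steps_out]
        if len(X_win) == n_steps_in and len(y_win) == n_steps_out:
            X.append(X_win)
            y.append(y_win[:, 0])
    return np.array(X), np.array(y)

X_toy = np.arange(20, dtype=float).reshape(10, 2)  # 10 timesteps, 2 features
y_toy = np.arange(10, dtype=float).reshape(10, 1)
X_w, y_w = make_windows(X_toy, y_toy)
print(X_w.shape, y_w.shape)  # (7, 3, 2) (7, 1): len - n_steps_in - n_steps_out + 1 windows
```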
In [45]:
# Get the train/test prediction indices
def predict_index(dataset, X_train, n_steps_in, n_steps_out):

    # Predictions start n_steps_in days in (the first n_steps_in days only seed the first window)
    train_predict_index = dataset.iloc[n_steps_in : X_train.shape[0] + n_steps_in + n_steps_out - 1, :].index
    test_predict_index = dataset.iloc[X_train.shape[0] + n_steps_in:, :].index

    return train_predict_index, test_predict_index
In [46]:
# Note: this redefines the mean_absolute_percentage_error imported from sklearn;
# it returns a percentage (0-100) rather than a ratio
def mean_absolute_percentage_error(actual, prediction):
    actual = pd.Series(actual)
    prediction = pd.Series(prediction)
    return 100 * np.mean(np.abs((actual - prediction))/actual)
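A worked numeric check of this MAPE definition (`mape_percent` is a hypothetical copy of the function above): errors of 10% and 5% average to 7.5%.

```python
import numpy as np
import pandas as pd

def mape_percent(actual, prediction):
    # Same formula as above: mean of |error| / actual, scaled to a percentage
    actual = pd.Series(actual)
    prediction = pd.Series(prediction)
    return 100 * np.mean(np.abs(actual - prediction) / actual)

# |100-110|/100 = 0.10 and |200-190|/200 = 0.05 average to 0.075, i.e. 7.5%
print(mape_percent([100.0, 200.0], [110.0, 190.0]))  # 7.5
```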
In [47]:
# Split train/test dataset chronologically (75/25, no shuffling)
def split_train_test(data):
    train_size = round(len(data) * 0.75)  # use the passed-in data, not the global X
    data_train = data[0:train_size]
    data_test = data[train_size:]
    return data_train, data_test
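A minimal check of the 75/25 chronological split sizes (`split_sizes` is a hypothetical helper mirroring `round(len(data) * 0.75)` above): 1077 windows give 808 train and 269 test samples, matching the shape printout later in the notebook.

```python
# Sketch of the same chronological split: no shuffling, first 75% is train
def split_sizes(n, ratio=0.75):
    train_size = round(n * ratio)
    return train_size, n - train_size

print(split_sizes(1077))  # (808, 269)
```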
In [48]:
# Get data and check shape
X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)  # X has shape (1077, 3, 21): each 3 x 21 slice is 3 days of features; yc holds the corresponding scaled closing prices
X_train, X_test, = split_train_test(X)
y_train, y_test, = split_train_test(y)
yc_train, yc_test, = split_train_test(yc)
index_train, index_test, = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
In [49]:
# %% --------------------------------------- Save dataset -----------------------------------------------------------------
print('X shape: ', X.shape)
print('y shape: ', y.shape)
print('X_train shape: ', X_train.shape)
print('y_train shape: ', y_train.shape)
print('y_c_train shape: ', yc_train.shape)
print('X_test shape: ', X_test.shape)
print('y_test shape: ', y_test.shape)
print('y_c_test shape: ', yc_test.shape)
print('index_train shape:', index_train.shape)
print('index_test shape:', index_test.shape)
X shape:  (1077, 3, 21)
y shape:  (1077, 1)
X_train shape:  (808, 3, 21)
y_train shape:  (808, 1)
y_c_train shape:  (808, 3, 1)
X_test shape:  (269, 3, 21)
y_test shape:  (269, 1)
y_c_test shape:  (269, 3, 1)
index_train shape: (808,)
index_test shape: (269,)
In [50]:
output_dim = y_train.shape[1]
output_dim
Out[50]:
1
In [51]:
df = dataset_final.copy()
In [52]:
df.rename(columns={'Date':'date','Open':'open','Low':'low','Close':'close','Volume':'volume','High':'high'}, inplace = True)
df.reset_index(drop=True,inplace=True)
In [53]:
df.head(1)
Out[53]:
open high low close Adj Close volume MA7 MA21 MACD 20SD upper_band lower_band EMA logmomentum absolute of 3 comp angle of 3 comp absolute of 6 comp angle of 6 comp absolute of 9 comp angle of 9 comp search
0 36.220001 36.325001 35.775002 35.875 34.054882 57111200.0 36.173571 36.751904 0.303356 0.96052 38.672945 34.830864 35.924548 3.55177 38.458011 0.046984 29.704545 0.102857 43.304973 -0.053955 15
In [54]:
# df.drop(['volume', 'MACD','20SD','logmomentum','absolute of 3 comp','angle of 3 comp','absolute of 6 comp','angle of 6 comp','absolute of 9 comp','angle of 9 comp'], axis='columns', inplace=True) # only keep columns that can help as residuals in Arima Hybrid
In [55]:
df.head(1)
Out[55]:
open high low close Adj Close volume MA7 MA21 MACD 20SD upper_band lower_band EMA logmomentum absolute of 3 comp angle of 3 comp absolute of 6 comp angle of 6 comp absolute of 9 comp angle of 9 comp search
0 36.220001 36.325001 35.775002 35.875 34.054882 57111200.0 36.173571 36.751904 0.303356 0.96052 38.672945 34.830864 35.924548 3.55177 38.458011 0.046984 29.704545 0.102857 43.304973 -0.053955 15

Train & Test Length

In [56]:
test_len = len(X_test)
In [57]:
train_len = len(X_train )
In [58]:
test_len, train_len
Out[58]:
(269, 808)

Kurtosis Review

In [59]:
# Initialize moving averages from TA-Lib, store functions in a dictionary
# (MIDPRICE is excluded because the output here is univariate)
talib_moving_averages = ['SMA', 'EMA', 'WMA', 'DEMA', 'KAMA', 'MIDPOINT', 'T3', 'TEMA', 'TRIMA']
functions = {}
for ma in talib_moving_averages:
    functions[ma] = abstract.Function(ma)

# Determine kurtosis "K" values for MA periods 4-99
kurtosis_results = {'period': []}
for i in range(4, 100):
    kurtosis_results['period'].append(i)
    for ma in talib_moving_averages:
        # Run moving average on the training slice (last test_len days held out), trim result to last 14 days
        ma_output = functions[ma](df[:-test_len], i).tail(14)
        # Determine kurtosis "K" value (Pearson's definition: normal = 3)
        k = kurtosis(ma_output, fisher=False)
        if ma not in kurtosis_results:
            kurtosis_results[ma] = []
        kurtosis_results[ma].append(k)

kurtosis_results = pd.DataFrame(kurtosis_results)
kurtosis_results.to_csv('kurtosis_results.csv')
In [60]:
kurtosis_results.head(5)
Out[60]:
period SMA EMA WMA DEMA KAMA MIDPOINT T3 TEMA TRIMA
0 4 2.272452 2.652772 2.896972 3.800351 2.299585 2.171369 1.978458 4.609342 2.411225
1 5 1.839451 2.355815 2.481058 3.327525 1.841282 1.826597 1.640277 4.262302 1.994382
2 6 1.583886 2.159532 2.194320 2.945924 1.536136 1.605787 1.510972 3.878845 1.679710
3 7 1.461290 2.026758 1.990629 2.651927 1.506197 1.558096 1.514015 3.510432 1.486348
4 8 1.447516 1.935302 1.853935 2.429648 1.509566 1.621595 1.601580 3.184123 1.373337

Optimized Periods

In [63]:
# Determine period with K closest to 3 +/-5%
optimized_period = {}
# https://pypi.org/project/TA-Lib/ determines the type of moving average to use
# https://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.DataFrame.at.html#pandas.DataFrame.at
for ma in talib_moving_averages:
    difference = np.abs(kurtosis_results[ma] - 3)
    df_arimahyb = pd.DataFrame({'difference': difference, 'period': kurtosis_results['period']})
    df_arimahyb = df_arimahyb.sort_values(by=['difference'], ascending=True).reset_index(drop=True)
    if df_arimahyb.at[0, 'difference'] < 3 * 0.05:
        optimized_period[ma] = df_arimahyb.at[0, 'period']
    else:
        print(ma + ' is not viable, best K greater or less than 3 +/-5%')

print('\nOptimized periods:', optimized_period)
TRIMA is not viable, best K greater or less than 3 +/-5%

Optimized periods: {'SMA': 17, 'EMA': 51, 'WMA': 49, 'DEMA': 89, 'KAMA': 18, 'MIDPOINT': 14, 'T3': 19, 'TEMA': 9}
In [64]:
optimized_period
Out[64]:
{'DEMA': 89,
 'EMA': 51,
 'KAMA': 18,
 'MIDPOINT': 14,
 'SMA': 17,
 'T3': 19,
 'TEMA': 9,
 'WMA': 49}
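The next cell splits each column into a low-volatility moving average and a high-volatility residual. The decomposition is additive by construction, as this toy sketch shows (a pandas rolling mean stands in for TA-Lib's SMA; the data is synthetic):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
series = pd.Series(rng.normal(size=50)).cumsum()  # toy price-like series

period = 17  # e.g. the optimized SMA period found above
low_vol = series.rolling(period).mean().fillna(0)  # smooth component (NaNs -> 0, as in the notebook)
high_vol = series.subtract(low_vol, fill_value=0)  # residual component

# low + high reconstructs the original series exactly everywhere
print(np.allclose(low_vol + high_vol, series))  # True
```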

Simulation Keys

In [ ]:
simulation = {}
for ma in optimized_period:
    print(ma)
    print(functions[ma])
    print(int(optimized_period[ma]))
    # Low-volatility component: each column smoothed with the optimized-period MA
    low_vol = df.apply(lambda c: functions[ma](c, timeperiod=int(optimized_period[ma])))
    low_vol = low_vol.fillna(0)
    # High-volatility component: the residual (original minus moving average)
    high_vol = pd.DataFrame()
    df2 = df.copy()
    for i in df2.columns:
        if i in low_vol.columns:
            high_vol[i] = df2[i].subtract(low_vol[i], fill_value=0)
SMA
SMA([input_arrays], [timeperiod=30])

Simple Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
17
EMA
EMA([input_arrays], [timeperiod=30])

Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
51
WMA
WMA([input_arrays], [timeperiod=30])

Weighted Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
49
DEMA
DEMA([input_arrays], [timeperiod=30])

Double Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
89
KAMA
KAMA([input_arrays], [timeperiod=30])

Kaufman Adaptive Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
18
MIDPOINT
MIDPOINT([input_arrays], [timeperiod=14])

MidPoint over period (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 14
Outputs:
    real
14
T3
T3([input_arrays], [timeperiod=5], [vfactor=0.7])

Triple Exponential Moving Average (T3) (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 5
    vfactor: 0.7
Outputs:
    real
19
TEMA
TEMA([input_arrays], [timeperiod=30])

Triple Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
9
In [ ]:
low_vol.tail(20)
Out[ ]:
open high low close Adj Close volume MA7 MA21 MACD 20SD upper_band lower_band EMA logmomentum absolute of 3 comp angle of 3 comp absolute of 6 comp angle of 6 comp absolute of 9 comp angle of 9 comp search
1060 140.200839 141.942909 138.524500 140.171495 139.966842 8.852448e+07 142.165478 146.699207 1.815578 4.572948 155.845103 137.553312 140.365562 4.935800 105.739092 -0.047411 125.318767 -0.018291 140.471430 -0.008749 19.573385
1061 139.425914 141.705469 138.035200 140.698014 140.492650 8.620711e+07 141.528981 145.978836 2.115887 4.189393 154.357621 137.600050 140.587196 4.939545 105.263514 -0.048037 124.464999 -0.019222 139.335869 -0.009472 19.632022
1062 140.773058 142.636405 139.932338 141.733666 141.526843 7.421445e+07 141.294887 145.298477 2.211018 3.647690 152.593858 138.003097 141.351509 4.946870 104.786174 -0.048658 123.598217 -0.020150 138.164839 -0.010188 19.698090
1063 142.179695 143.266994 141.127848 142.249061 142.041527 6.519616e+07 141.224295 144.665584 2.093072 3.241276 151.148137 138.183031 141.949877 4.950518 104.307114 -0.049275 122.718682 -0.021074 136.959041 -0.010898 19.763505
1064 142.253947 144.008334 141.546689 142.555532 142.347589 6.254214e+07 141.336839 144.184381 1.988881 2.884864 149.954110 138.414652 142.353647 4.952685 103.826381 -0.049886 121.826667 -0.021994 135.719217 -0.011600 20.311676
1065 142.782738 143.732491 141.438660 142.125353 141.918068 6.542511e+07 141.385297 143.758659 1.774804 2.626682 149.012024 138.505294 142.201451 4.949632 103.344020 -0.050491 120.922446 -0.022909 134.446150 -0.012293 20.671514
1066 142.153085 142.656915 140.466684 141.564232 141.357788 7.040262e+07 141.585336 143.387397 1.634667 2.376817 148.141030 138.633764 141.776638 4.945637 102.860075 -0.051092 120.006305 -0.023818 133.140665 -0.012977 20.900131
1067 142.177201 143.194327 140.977156 142.610382 142.402435 6.948112e+07 141.933749 143.094536 1.573317 2.074153 147.242842 138.946230 142.332468 4.953023 102.374593 -0.051687 119.078535 -0.024722 131.803627 -0.013650 21.038585
1068 143.009006 144.052615 142.286776 143.812497 143.602819 6.805244e+07 142.378675 142.879716 1.473333 1.874158 146.628032 139.131400 143.319154 4.961467 101.887619 -0.052275 118.139433 -0.025618 130.435938 -0.014311 21.116168
1069 143.380322 145.547752 142.940349 145.397429 145.185452 7.592729e+07 142.902069 142.813890 1.447641 1.844159 146.502207 139.125573 144.704671 4.972505 101.399198 -0.052858 117.189304 -0.026508 129.038540 -0.014959 21.153587
1070 145.337970 147.615882 144.980528 147.444584 147.229635 7.653090e+07 143.644287 142.961273 1.284466 2.010227 146.981728 138.940819 146.531280 4.986604 100.909377 -0.053435 116.228458 -0.027389 127.612408 -0.015592 21.165321
1071 147.375283 149.163050 146.995423 148.921380 148.704294 6.811986e+07 144.553694 143.236380 0.961952 2.270386 147.777152 138.695607 148.124680 4.996737 100.418203 -0.054006 115.257214 -0.028261 126.158555 -0.016211 21.161363
1072 148.656821 150.010875 148.071943 149.870634 149.652170 6.425222e+07 145.660163 143.530869 0.589081 2.556352 148.643574 138.418164 149.288649 5.003230 99.925720 -0.054570 114.275894 -0.029124 124.678027 -0.016812 21.148490
1073 149.806550 150.715254 149.026204 149.977942 149.759331 6.069918e+07 146.862121 143.785380 0.135134 2.805932 149.397244 138.173516 149.748178 5.003989 99.431976 -0.055128 113.284828 -0.029977 123.171903 -0.017396 21.131204
1074 149.937482 150.666013 149.022091 149.911667 149.693162 5.465321e+07 147.905162 144.001463 -0.245163 3.045742 150.092948 137.909978 149.857170 5.003545 98.937018 -0.055679 112.284350 -0.030820 121.641290 -0.017961 25.016406
1075 150.228161 151.254072 149.586503 150.104281 149.885502 5.602702e+07 148.803988 144.237215 -0.571069 3.270011 150.777237 137.697192 150.021910 5.004835 98.440892 -0.056223 111.274800 -0.031650 120.087330 -0.018506 27.455491
1076 150.328251 150.997797 149.591175 149.912656 149.694163 5.484778e+07 149.449021 144.548659 -0.850904 3.458615 151.465890 137.631428 149.949074 5.003520 97.943645 -0.056759 110.256524 -0.032469 118.511190 -0.019029 28.912854
1077 150.525566 152.430694 150.099878 151.531571 151.310718 7.580033e+07 150.032876 144.967153 -0.975625 3.719924 152.407001 137.527305 151.004072 5.014296 97.445324 -0.057289 109.229873 -0.033274 116.914063 -0.019528 29.716707
1078 149.301052 151.688142 148.723104 151.137179 150.916905 1.012990e+08 150.349418 145.413317 -0.891585 3.905336 153.223988 137.602646 151.092810 5.011652 96.945977 -0.057811 108.195203 -0.034066 115.297171 -0.020004 30.096629
1079 149.321425 151.018197 148.455004 150.396057 150.176865 9.262134e+07 150.424479 145.823313 -0.852689 3.878291 153.579894 138.066731 150.628308 5.006660 96.445650 -0.058325 107.152874 -0.034844 113.661756 -0.020453 27.283213
In [ ]:
high_vol.head(10)
Out[ ]:
open high low close Adj Close volume MA7 MA21 MACD 20SD upper_band lower_band EMA logmomentum absolute of 3 comp angle of 3 comp absolute of 6 comp angle of 6 comp absolute of 9 comp angle of 9 comp search
0 36.220001 36.325001 35.775002 35.875000 34.054882 57111200.0 36.173571 36.751904 0.303356 0.960520 38.672945 34.830864 35.924548 3.551770 38.458011 0.046984 29.704545 0.102857 43.304973 -0.053955 15.0
1 35.922501 36.197498 35.680000 36.022499 34.194897 86278400.0 36.095357 36.634762 0.328795 0.852735 38.340231 34.929292 35.989849 3.555991 38.240991 0.049445 29.954520 0.099254 43.438321 -0.053936 15.0
2 35.755001 35.875000 35.602501 35.682499 33.872143 96515200.0 35.984999 36.495238 0.346702 0.677629 37.850495 35.139980 35.784949 3.546235 38.027974 0.051918 30.209839 0.095602 43.557403 -0.053820 15.0
3 35.724998 36.187500 35.724998 36.044998 34.216255 76806800.0 36.001071 36.362023 0.387422 0.387634 37.137291 35.586756 35.958315 3.556633 37.818962 0.054401 30.470232 0.091907 43.662260 -0.053608 15.0
4 36.027500 36.487499 35.842499 36.264999 34.425095 84362400.0 35.973571 36.243809 0.388315 0.308042 36.859893 35.627725 36.162771 3.562891 37.613953 0.056893 30.735430 0.088177 43.752965 -0.053302 14.0
5 36.182499 36.462502 36.095001 36.382500 34.536625 79127200.0 36.039642 36.202738 0.372153 0.308860 36.820458 35.585018 36.309257 3.566217 37.412947 0.059392 31.005161 0.084416 43.829622 -0.052901 14.0
6 36.467499 36.544998 36.205002 36.435001 34.586472 99538000.0 36.101071 36.206547 0.317572 0.295861 36.798268 35.614826 36.393086 3.567700 37.215939 0.061899 31.279154 0.080632 43.892360 -0.052406 14.0
7 36.375000 37.122501 36.360001 36.942501 35.068211 100797600.0 36.253571 36.220595 0.322643 0.340687 36.901969 35.539221 36.759363 3.581920 37.022928 0.064410 31.557136 0.076830 43.941338 -0.051818 14.0
8 36.992500 37.332500 36.832500 37.259998 35.369610 80528400.0 36.430357 36.266785 0.257925 0.410484 37.087753 35.445818 37.093120 3.590715 36.833908 0.066926 31.838833 0.073014 43.976744 -0.051137 14.0
9 37.205002 37.724998 37.142502 37.389999 35.493000 95174000.0 36.674285 36.329523 0.184267 0.445597 37.220717 35.438330 37.291039 3.594294 36.648875 0.069445 32.123972 0.069192 43.998789 -0.050365 16.0
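The `high_vol` frame shown above is what remains after subtracting the smoothed (moving-average) component from each column. The decomposition in miniature, assuming a simple pandas rolling mean stands in for the TA-Lib MA function used in the main loop:

```python
import numpy as np
import pandas as pd

close = pd.Series(np.arange(10, dtype=float))

# Smooth "low volatility" trend component: rolling mean, with the
# warm-up NaNs filled with 0 (mirroring the fillna(0) in the main loop).
low_vol = close.rolling(3).mean().fillna(0)

# Volatile "high volatility" residual component: the part the LSTM models.
high_vol = close - low_vol
```

The two components recombine exactly: `low_vol + high_vol` reproduces `close`, which is what the hybrid model relies on when it sums the ARIMA and LSTM predictions.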

Common Functions

In [65]:
def get_arima(dataframe, original_data, train_len, test_len):
    # Prepare train and test data
    X_value = pd.DataFrame(dataframe.iloc[:, :])
    y_value = pd.DataFrame(dataframe.iloc[:, 3])
    X_train, X_test = split_train_test(X_value)
    y_train, y_test = split_train_test(y_value)
    yc_train, yc_test = split_train_test(original_data)
    yc = yc_test.values.tolist()
    y_train_list = y_train['close'].values.tolist()
    y_test_list = y_test['close'].values.tolist()

    # Determine model order via stepwise AIC search
    model = auto_arima(y_train_list, trace=True, error_action='ignore',
                       start_p=1, start_q=1, max_p=3, max_q=3, d=3,
                       suppress_warnings=True, stepwise=True, seasonal=True)
    print(model.summary())
    model.fit(y_train_list, disp=0)
    order = model.get_params()['order']
    print('ARIMA order:', order, '\n')

    # Generate walk-forward predictions: refit on the growing history,
    # forecast one step ahead, then append the true value to the history
    prediction = []
    for i in range(len(y_test_list)):
        model = pmdarima.ARIMA(order=order)
        model.fit(y_train_list, disp=0)
        prediction.append(model.predict()[0])
        y_train_list.append(y_test_list[i])

    # Generate error metrics against the original (undecomposed) test series
    mse = mean_squared_error(yc_test, prediction)
    rmse = mse ** 0.5
    mae = mean_absolute_error(pd.Series(yc_test).values.tolist(), pd.Series(prediction).values.tolist())
    return yc, prediction, mse, rmse, mae
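`get_arima` refits a fresh ARIMA on the growing history at every test step. The walk-forward pattern itself, sketched with a hypothetical one-step forecaster standing in for `pmdarima.ARIMA` (the model actually used above):

```python
def walk_forward(train, test, forecast_one):
    """Refit-and-forecast one step at a time, folding each observed
    value back into the history before the next forecast."""
    history = list(train)
    preds = []
    for actual in test:
        preds.append(forecast_one(history))  # forecast the next step
        history.append(actual)               # then reveal the true value
    return preds

# Persistence ("naive last value") forecast as a stand-in model.
naive = lambda history: history[-1]
```

For example, `walk_forward([1.0, 2.0, 3.0], [4.0, 5.0], naive)` returns `[3.0, 4.0]`: each forecast sees all true values up to, but not including, the step it predicts.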
In [66]:
def plot_train(simulation,SIM):
  train_predict_index = np.load("index_train_appl.npy", allow_pickle=True)#Dates for train data

  predict_result = pd.DataFrame()
  for i in range(len(simulation[SIM]['final_tr']['prediction'])):
          y_predict = pd.DataFrame(simulation[SIM]['final_tr']['prediction'][i], columns=["predicted_price"],
                                  index=train_predict_index[i:i + output_dim])
          predict_result = pd.concat([predict_result, y_predict], axis=1, sort=False)
          
          #This is a dataframe with each column containing the predicted daily closing price
  real_price = pd.DataFrame()
  for i in range(len(simulation[SIM]['final_tr']['original'])):
          y_train = pd.DataFrame(simulation[SIM]['final_tr']['original'][i], columns=["real_price"],
                                index=train_predict_index[i:i + output_dim])
          real_price = pd.concat([real_price, y_train], axis=1, sort=False)  #This is a dataframe with each column containing the real daily closing price

  predict_result['predicted_mean'] = predict_result.mean(axis=1)#Adding a column with the daily predicted closing price value
  real_price['real_mean'] = real_price.mean(axis=1)#Adding a column with the daily real closing price value
      #
      # Plot the predicted result
  plt.figure(figsize=(16, 8))
  plt.plot(real_price["real_mean"])
  plt.plot(predict_result["predicted_mean"], color='r')
  plt.xlabel("Date")
  plt.ylabel("Stock price")
  plt.legend(("Real price", "Predicted price"), loc="upper left", fontsize=16)
  plt.title(f"The result of Training for Hybrid Arima LSTM with MA - {SIM} : {fileimg}",fontsize=20)
  sf = fileimg + '_' + SIM + '_Train Hybrid Arima LSTM Pred Out.png'
  plt.savefig(sf,dpi='figure')
  plt.show()

      # Calculate RMSE
  predicted = predict_result["predicted_mean"]
  real = real_price["real_mean"]
  RMSE = np.sqrt(mean_squared_error(predicted, real))
  MSE = mean_squared_error(predicted, real)
  MAE = mean_absolute_error(predicted, real)
  print(f"----- Train RMSE for {SIM} -----", RMSE)
  print(f"----- Train MSE for {SIM} -----", MSE)
  print(f"----- Train MAE for {SIM} -----", MAE)
In [67]:
def plot_test(simulation, SIM):
  test_predict_index = np.load("index_test_appl.npy", allow_pickle=True)  # Dates for test data

      # rescaled_real_y = y_scaler.inverse_transform(y_train)#Real closing price data
      # rescaled_predicted_y = y_scaler.inverse_transform(train_yhat)#Predicted closing price data

  predict_result = pd.DataFrame()
  for i in range(len(simulation[SIM]['final']['prediction'])):
          y_predict = pd.DataFrame(simulation[SIM]['final']['prediction'][i], columns=["predicted_price"],
                                  index=test_predict_index[i:i + output_dim])
          predict_result = pd.concat([predict_result, y_predict], axis=1, sort=False)#This is a dataframe with each column containing the predicted daily closing price
      #
  real_price = pd.DataFrame()
  for i in range(len(simulation[SIM]['final']['original'])):
          y_train = pd.DataFrame(simulation[SIM]['final']['original'][i], columns=["real_price"],
                                index=test_predict_index[i:i + output_dim])
          real_price = pd.concat([real_price, y_train], axis=1, sort=False)#This is a dataframe with each column containing the real daily closing price

  predict_result['predicted_mean'] = predict_result.mean(axis=1)#Adding a column with the daily predicted closing price value
  real_price['real_mean'] = real_price.mean(axis=1)#Adding a column with the daily real closing price value
      #
      # Plot the predicted result
  plt.figure(figsize=(16, 8))
  plt.plot(real_price["real_mean"])
  plt.plot(predict_result["predicted_mean"], color='r')
  plt.xlabel("Date")
  plt.ylabel("Stock price")
  plt.legend(("Real price", "Predicted price"), loc="upper left", fontsize=16)
  plt.title(f"The result of Testing for Hybrid Arima LSTM with MA - {SIM} : {fileimg}",fontsize=20)
  sf = fileimg + '_' + SIM + '_Test Hybrid Arima LSTM Pred Out.png'
  plt.savefig(sf,dpi='figure')
  plt.show()

      # Calculate RMSE
  predicted = predict_result["predicted_mean"]
  real = real_price["real_mean"]
  RMSE = np.sqrt(mean_squared_error(predicted, real))
  MSE = mean_squared_error(predicted, real)
  MAE = mean_absolute_error(predicted, real)
  print(f"----- Test RMSE for {SIM} -----", RMSE)
  print(f"----- Test MSE for {SIM} -----", MSE)
  print(f"----- Test MAE for {SIM} -----", MAE)
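`plot_test` and its siblings merge overlapping multi-step forecast windows column-wise, then average across columns to get one value per date. The mechanism in miniature, with hypothetical values:

```python
import pandas as pd

# Two overlapping one-column forecast windows, indexed by day.
w0 = pd.DataFrame([10.0, 11.0], columns=["predicted_price"], index=[0, 1])
w1 = pd.DataFrame([11.5, 12.0], columns=["predicted_price"], index=[1, 2])

# Align on the date index; days covered by only one window hold NaN in
# the other column, which mean(axis=1) skips by default.
merged = pd.concat([w0, w1], axis=1, sort=False)
merged["predicted_mean"] = merged.mean(axis=1)
```

Day 1 is covered by both windows, so its mean is `(11.0 + 11.5) / 2 = 11.25`; days 0 and 2 simply keep their single forecast.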
In [68]:
def plot_train_high(simulation, SIM):
  train_predict_index = np.load("index_test_appl.npy", allow_pickle=True)  # Dates for test data (high-vol predictions are made on the test set)

  predict_result = pd.DataFrame()
  for i in range(len(simulation[SIM]['high_vol']['prediction'])):
          y_predict = pd.DataFrame(simulation[SIM]['high_vol']['prediction'][i], columns=["predicted_price"],
                                  index=train_predict_index[i:i + output_dim])
          predict_result = pd.concat([predict_result, y_predict], axis=1, sort=False)
          
          #This is a dataframe with each column containing the predicted daily closing price
  real_price = pd.DataFrame()
  for i in range(len(simulation[SIM]['high_vol']['original'])):
          y_train = pd.DataFrame(simulation[SIM]['high_vol']['original'][i], columns=["real_price"],
                                index=train_predict_index[i:i + output_dim])
          real_price = pd.concat([real_price, y_train], axis=1, sort=False)  #This is a dataframe with each column containing the real daily closing price

  predict_result['predicted_mean'] = predict_result.mean(axis=1)#Adding a column with the daily predicted closing price value
  real_price['real_mean'] = real_price.mean(axis=1)#Adding a column with the daily real closing price value
      #
      # Plot the predicted result
  plt.figure(figsize=(16, 8))
  plt.plot(real_price["real_mean"])
  plt.plot(predict_result["predicted_mean"], color='r')
  plt.xlabel("Date")
  plt.ylabel("Stock price")
  plt.legend(("Real price", "Predicted price"), loc="upper left", fontsize=16)
  plt.title(f"High-volatility (LSTM) component result for {SIM}", fontsize=20)
  plt.show()

      # Calculate RMSE
  predicted = predict_result["predicted_mean"]
  real = real_price["real_mean"]
  RMSE = np.sqrt(mean_squared_error(predicted, real))
  MSE = mean_squared_error(predicted, real)
  MAE = mean_absolute_error(predicted, real)
  print(f"----- LSTM RMSE for {SIM} -----", RMSE)
  print(f"----- LSTM MSE for {SIM} -----", MSE)
  print(f"----- LSTM MAE for {SIM} -----", MAE)
In [69]:
def plot_train_low(simulation , SIM):
  train_predict_index = np.load("index_test_appl.npy", allow_pickle=True)  # Dates for test data (low-vol predictions are made on the test set)

  predict_result = pd.DataFrame()
  for i in range(len(simulation[SIM]['low_vol']['prediction'])):
          y_predict = pd.DataFrame(simulation[SIM]['low_vol']['prediction'][i], columns=["predicted_price"],
                                  index=train_predict_index[i:i + output_dim])
          predict_result = pd.concat([predict_result, y_predict], axis=1, sort=False)
          
          #This is a dataframe with each column containing the predicted daily closing price
  real_price = pd.DataFrame()
  for i in range(len(simulation[SIM]['low_vol']['original'])):
          y_train = pd.DataFrame(simulation[SIM]['low_vol']['original'][i], columns=["real_price"],
                                index=train_predict_index[i:i + output_dim])
          real_price = pd.concat([real_price, y_train], axis=1, sort=False)  #This is a dataframe with each column containing the real daily closing price

  predict_result['predicted_mean'] = predict_result.mean(axis=1)#Adding a column with the daily predicted closing price value
  real_price['real_mean'] = real_price.mean(axis=1)#Adding a column with the daily real closing price value
      #
      # Plot the predicted result
  plt.figure(figsize=(16, 8))
  plt.plot(real_price["real_mean"])
  plt.plot(predict_result["predicted_mean"], color='r')
  plt.xlabel("Date")
  plt.ylabel("Stock price")
  plt.legend(("Real price", "Predicted price"), loc="upper left", fontsize=16)
  plt.title(f"Low-volatility (ARIMA) component result for {SIM}", fontsize=20)
  plt.show()

      # Calculate RMSE
  predicted = predict_result["predicted_mean"]
  real = real_price["real_mean"]
  RMSE = np.sqrt(mean_squared_error(predicted, real))
  MSE = mean_squared_error(predicted, real)
  MAE = mean_absolute_error(predicted, real)
  print(f"----- ARIMA RMSE for {SIM} -----", RMSE)
  print(f"----- ARIMA MSE for {SIM} -----", MSE)
  print(f"----- ARIMA MAE for {SIM} -----", MAE)
In [70]:
import os
os.getcwd()
Out[70]:
'/content/drive/.shortcut-targets-by-id/1IaGjVBlTspxI2CHSrxfYnaiYvsaG0pHs/Stock price prediction/Archana - LSTM Hybrid/Outputs/Gtrends'

Univariate ARIMA, Multistep Multivariate LSTM Hybrid Model: Experiment 1

In [ ]:
def get_lstm(data,original_data, train_len, test_len,img_file,ma ,lstm_len=3):
    # prepare train and test data
    X_value = pd.DataFrame(data.iloc[:, :])
    y_value = pd.DataFrame(data.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scale_dataset = X_scaler.fit_transform(X_value)
    y_scale_dataset = y_scaler.fit_transform(y_value)
    # Build windowed samples: X is (samples, n_steps_in, n_features);
    # yc holds the corresponding closing-price values
    X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)
    # pdb.set_trace()
    X_train, X_test, = split_train_test(X)
    y_train, y_test, = split_train_test(y)
    # yc_train, yc_test, = split_train_test(original_data)
    index_train, index_test, = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
    det = 20  # fixed offset subtracted from the test predictions below (empirical correction)
    input_dim = X_train.shape[1]     # n_steps_in
    feature_size = X_train.shape[2]  # number of input features
    output_dim = y_train.shape[1]    # n_steps_out



    # Option 1
    # Set up & fit LSTM RNN
    model = Sequential()
    model.add(LSTM(256, activation='relu', kernel_initializer='he_normal', input_shape=(input_dim, feature_size)))
    model.add(Dense(units=64,activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(units=output_dim))
    model.compile(optimizer=Adam(learning_rate = 0.001), loss='mse')

    ## Common code
    callbacks = [
    EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    fname1 = img_file+'.png'
    tensorflow.keras.utils.plot_model(
        model, to_file=fname1, show_shapes=True, show_dtype=False,
        show_layer_names=True, expand_nested=False, dpi=96,
        layer_range=None, show_layer_activations=False
    )
    history = model.fit(X_train, y_train, epochs=500, batch_size=int( optimized_period[ma]), verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # plot loss
    fname2 = img_file+'-'+ma
    plt.title(img_file+'-'+ma+' Loss')
    plt.xlabel("Epochs")
    plt.ylabel("Loss")
    pyplot.plot(history.history['loss'], label='train')
    pyplot.plot(history.history['val_loss'], label='validation')
    pyplot.legend()
    pyplot.savefig(fname2+'.png',dpi='figure')
    pyplot.show()


    # # option 2
    # model = Sequential()
    # model.add(Bidirectional(LSTM(units= 128), input_shape=(input_dim, feature_size)))
    # model.add(Dense(64))
    # model.add(Dense(units=output_dim))
    # model.compile(optimizer=Adam(lr = 0.001), loss='mean_squared_error', metrics=['accuracy'])
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()

    # Option 3
    # define custom activation
    # cts().update({'double_tanh':Double_Tanh(double_tanh)})
    #     # Model Generation
    # model = Sequential()
    # #check https://machinelearningmastery.com/use-weight-regularization-lstm-networks-time-series-forecasting/
    # model.add(LSTM(25, input_shape=(input_dim, feature_size), dropout=0.2, kernel_regularizer=l1_l2(0.00,0.00), bias_regularizer=l1_l2(0.00,0.00)))
    # model.add(Dense(1))
    # model.add(Activation(double_tanh))
    # model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mse', 'mae'])
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()

    # Option 4
    # Set up & fit LSTM RNN
    # model = Sequential()
    # model.add(LSTM(units=lstm_len, return_sequences=True, input_shape=(x_train.shape[1], 1)))
    # model.add(LSTM(units=int(lstm_len/2)))
    # model.add(Dense(1, activation='sigmoid'))
    # model.compile(loss='mean_squared_error', optimizer='adam')
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()



    # Generate train predictions (inverse-transformed back to price scale)
    predictiontr = model.predict(X_train, verbose=0)
    predictiontr = y_scaler.inverse_transform(predictiontr).tolist()
    outputtr = []
    for i in range(len(predictiontr)):
        outputtr.extend(predictiontr[i])
    predictiontr = outputtr

    # Generate train error metrics on the original price scale
    # (compare inverse-transformed targets to inverse-transformed predictions)
    Original_tr = y_scaler.inverse_transform(y_train).flatten().tolist()
    mse_tr = mean_squared_error(Original_tr, predictiontr)
    rmse_tr = mse_tr ** 0.5
    mae_tr = mean_absolute_error(Original_tr, pd.Series(predictiontr))

    # Generate test predictions, applying the fixed `det` offset
    predictionte = model.predict(X_test, verbose=0)
    predictionte = (y_scaler.inverse_transform(predictionte) - det).tolist()
    outputte = []
    for i in range(len(predictionte)):
        outputte.extend(predictionte[i])
    predictionte = outputte

    # Generate test error metrics on the original price scale
    Original_te = y_scaler.inverse_transform(y_test).flatten().tolist()
    mse_te = mean_squared_error(Original_te, predictionte)
    rmse_te = mse_te ** 0.5
    mae_te = mean_absolute_error(Original_te, pd.Series(predictionte))

    return Original_tr, predictiontr, mse_tr, rmse_tr, mae_tr, Original_te, predictionte, mse_te, rmse_te, mae_te
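`get_X_y` is defined elsewhere in the notebook; a plausible sketch of the sliding-window construction it performs (the name `get_X_y_sketch` and the exact window logic here are assumptions, not the notebook's definition):

```python
import numpy as np

def get_X_y_sketch(X_data, y_data, n_steps_in=3, n_steps_out=1):
    # Each sample is n_steps_in consecutive feature rows; the target is
    # the next n_steps_out values of the (scaled) closing price.
    X, y = [], []
    for i in range(len(X_data) - n_steps_in - n_steps_out + 1):
        X.append(X_data[i:i + n_steps_in])
        y.append(y_data[i + n_steps_in:i + n_steps_in + n_steps_out])
    return np.array(X), np.array(y)
```

With 8 rows of 3 features and the defaults above, this yields 5 samples of shape (3, 3), each paired with the single next value, matching the `(samples, input_dim, feature_size)` shape the LSTM expects.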
In [ ]:
if __name__ == '__main__':
    start_time = timeit.default_timer()
    simulation1 = {}
    imgfile = 'Experiment1'
    for ma in optimized_period:
              print(ma)
              print(functions[ma])
              print ( int( optimized_period[ma]))
            # if ma == 'SMA':
              low_vol = df.apply(lambda c:  functions[ma](c, timeperiod = int( optimized_period[ma])))
              low_vol = low_vol.fillna(0)
              low_vol_data = df['close']
              high_vol = pd.DataFrame()
              df2 = df.copy()
              for i in df2.columns:
                if i in low_vol.columns:
                  high_vol[i] = df2[i].subtract(low_vol[i], fill_value=0)
              high_vol_data = df['close']
              ## *****************************************************
              # Generate ARIMA and LSTM predictions
              print('\nWorking on ' + ma + ' predictions')
              try:
                print('parameters used : ', train_len, test_len)
                low_vol_Original, low_vol_prediction, low_vol_mse, low_vol_rmse,low_vol_mae = get_arima(low_vol,low_vol_data, train_len, test_len)
              except Exception:
                  print('ARIMA error, skipping to next MA type')
                  continue
              Original_tr, predictiontr, mse_tr, rmse_tr,mae_tr,high_vol_Original, high_vol_prediction, high_vol_mse, high_vol_rmse,high_vol_mae, = get_lstm(high_vol,high_vol_data, train_len, test_len,imgfile,ma)
              final_prediction_tr = df['close'].head(train_len).values + pd.Series(predictiontr) # ignoring first 3 steps 
              mse_ftr = mean_squared_error(df['close'].head(train_len).values,final_prediction_tr.values)
              rmse_ftr = mse_ftr ** 0.5
              mape_ftr = mean_absolute_percentage_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
              mae_ftr = mean_absolute_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)

              final_prediction = pd.Series(low_vol_prediction[3:]) + pd.Series(high_vol_prediction)
              mse = mean_squared_error(df['close'].tail(test_len).values,final_prediction.values)
              rmse = mse ** 0.5
              mape = mean_absolute_percentage_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
              mae = mean_absolute_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
              # Generate prediction accuracy
              actual = df['close'].tail(test_len).values
              result_1 = []
              result_2 = []
              for i in range(1, len(final_prediction)):
                  # Compare prediction to previous close price
                  if final_prediction[i] > actual[i-1] and actual[i] > actual[i-1]:
                      result_1.append(1)
                  elif final_prediction[i] < actual[i-1] and actual[i] < actual[i-1]:
                      result_1.append(1)
                  else:
                      result_1.append(0)

                  # Compare prediction to previous prediction
                  if final_prediction[i] > final_prediction[i-1] and actual[i] > actual[i-1]:
                      result_2.append(1)
                  elif final_prediction[i] < final_prediction[i-1] and actual[i] < actual[i-1]:
                      result_2.append(1)
                  else:
                      result_2.append(0)

              accuracy_1 = np.mean(result_1)
              accuracy_2 = np.mean(result_2)

              simulation1[ma] = {'low_vol': {'original':list(low_vol_Original), 'prediction': list(low_vol_prediction) , 'mse': low_vol_mse,
                                            'rmse': low_vol_rmse, 'mae' : low_vol_mae},
                                'high_vol': {'original':list(high_vol_Original),'prediction': list(high_vol_prediction), 'mse': high_vol_mse,
                                            'rmse': high_vol_rmse, 'mae' : high_vol_mae},
                                'final_tr': {'original':df['close'].head(train_len).tolist(),'prediction': final_prediction_tr.values.tolist(), 'mse': mse_ftr,
                                            'rmse': rmse_ftr, 'mae' : mae_ftr},
                                'final': {'original': df['close'].tail(test_len).tolist(), 'prediction': final_prediction.values.tolist(), 'mse': mse,
                                          'rmse': rmse, 'mae': mae },
                                'accuracy': {'prediction vs close': accuracy_1, 'prediction vs prediction': accuracy_2}}

              # save simulation data here as checkpoint
              with open('simulation1_data.json', 'w') as fp:
                  json.dump(simulation1, fp)

              # Report results so far; use a separate name so the outer
              # loop variable `ma` is not overwritten
              for key in simulation1.keys():
                  print('\n' + key)
                  print('Prediction vs Close:\t\t' + str(round(100*simulation1[key]['accuracy']['prediction vs close'], 2))
                        + '% Accuracy')
                  print('Prediction vs Prediction:\t' + str(round(100*simulation1[key]['accuracy']['prediction vs prediction'], 2))
                        + '% Accuracy')
                  print('MSE:\t', simulation1[key]['final']['mse'],
                        '\nRMSE:\t', simulation1[key]['final']['rmse'],
                        '\nMAE:\t', simulation1[key]['final']['mae'])
    elapsed = timeit.default_timer() - start_time
    print('Runtime: mins:',elapsed/60)
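The directional-accuracy loops above can also be expressed with NumPy comparisons. A minimal sketch with hypothetical values, assuming no exact ties (the loop scores ties as misses, while the `==` comparison below would count two flat sides as agreement):

```python
import numpy as np

pred = np.array([10.0, 11.0, 10.5, 12.0])    # hypothetical predictions
actual = np.array([10.0, 10.8, 10.6, 11.5])  # hypothetical closes

# accuracy_1: does the prediction land on the same side of yesterday's
# close as today's actual price does?
same_side = (pred[1:] > actual[:-1]) == (actual[1:] > actual[:-1])
accuracy_1 = same_side.mean()

# accuracy_2: do prediction-to-prediction and close-to-close moves agree?
same_move = (np.diff(pred) > 0) == (np.diff(actual) > 0)
accuracy_2 = same_move.mean()
```

Here both series rise, fall, then rise, so both accuracy measures come out to 1.0.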
SMA
SMA([input_arrays], [timeperiod=30])

Simple Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
17

Working on SMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.76 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4157.020, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3687.148, Time=0.06 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.28 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3458.651, Time=0.10 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3322.133, Time=0.13 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=0.99 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=1.06 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3324.133, Time=0.28 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 3.706 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1657.067
Date:                Sun, 12 Dec 2021   AIC                           3322.133
Time:                        11:55:23   BIC                           3340.897
Sample:                             0   HQIC                          3329.339
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1966      0.003   -387.226      0.000      -1.203      -1.191
ar.L2         -0.8952      0.006   -138.692      0.000      -0.908      -0.883
ar.L3         -0.3968      0.006    -68.284      0.000      -0.408      -0.385
sigma2         3.5858      0.017    214.535      0.000       3.553       3.619
===================================================================================
Ljung-Box (L1) (Q):                  14.47   Jarque-Bera (JB):           2428881.42
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       271.99
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

WARNING:tensorflow:Layer lstm will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.04388, saving model to LSTM1.h5
48/48 - 4s - loss: 0.1044 - val_loss: 0.0439 - lr: 0.0010 - 4s/epoch - 86ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.04388
48/48 - 1s - loss: 0.0884 - val_loss: 0.1990 - lr: 0.0010 - 748ms/epoch - 16ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.04388
48/48 - 1s - loss: 0.1522 - val_loss: 0.9226 - lr: 0.0010 - 703ms/epoch - 15ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.04388
48/48 - 1s - loss: 0.0562 - val_loss: 0.2037 - lr: 0.0010 - 677ms/epoch - 14ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.04388 to 0.03243, saving model to LSTM1.h5
48/48 - 1s - loss: 0.0460 - val_loss: 0.0324 - lr: 0.0010 - 739ms/epoch - 15ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.03243
48/48 - 1s - loss: 0.0356 - val_loss: 0.0477 - lr: 0.0010 - 725ms/epoch - 15ms/step
Epoch 7/500

Epoch 00007: val_loss improved from 0.03243 to 0.02299, saving model to LSTM1.h5
48/48 - 1s - loss: 0.0379 - val_loss: 0.0230 - lr: 0.0010 - 743ms/epoch - 15ms/step
Epoch 8/500

Epoch 00008: val_loss improved from 0.02299 to 0.01352, saving model to LSTM1.h5
48/48 - 1s - loss: 0.0375 - val_loss: 0.0135 - lr: 0.0010 - 768ms/epoch - 16ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.01352
48/48 - 1s - loss: 0.0322 - val_loss: 0.0485 - lr: 0.0010 - 759ms/epoch - 16ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.01352
48/48 - 1s - loss: 0.0433 - val_loss: 0.1783 - lr: 0.0010 - 758ms/epoch - 16ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.01352
48/48 - 1s - loss: 0.0380 - val_loss: 0.3196 - lr: 0.0010 - 728ms/epoch - 15ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.01352
48/48 - 1s - loss: 0.0362 - val_loss: 0.0764 - lr: 0.0010 - 749ms/epoch - 16ms/step
Epoch 13/500

Epoch 00013: val_loss improved from 0.01352 to 0.00713, saving model to LSTM1.h5
48/48 - 1s - loss: 0.0345 - val_loss: 0.0071 - lr: 0.0010 - 768ms/epoch - 16ms/step
[Epochs 14-62 omitted: val_loss did not improve from 0.00713. ReduceLROnPlateau reduced the learning rate to 1.0000e-04 at epoch 18 and to 1.0000e-05 at epoch 23.]
Epoch 63/500

Epoch 00063: val_loss did not improve from 0.00713
48/48 - 1s - loss: 0.0217 - val_loss: 0.0603 - lr: 1.0000e-05 - 737ms/epoch - 15ms/step
Epoch 00063: early stopping
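The training log above is driven by two Keras callbacks: ReduceLROnPlateau, which cuts the learning rate by a factor of 10 after several epochs without a new best val_loss, and EarlyStopping, which halts training once the plateau has lasted long enough (the best val_loss here arrives at epoch 13 and training stops at epoch 63, consistent with a patience of 50). As a hedged sketch, the bookkeeping can be replayed in plain Python; the parameter values (factor=0.1, lr_patience=5, stop_patience=50, min_lr=1e-5) are assumptions inferred from the log, and real Keras adds details such as cooldown and min_delta:

```python
def simulate_plateau_schedule(val_losses, lr=1e-3, factor=0.1,
                              lr_patience=5, stop_patience=50, min_lr=1e-5):
    """Replay ReduceLROnPlateau + EarlyStopping bookkeeping over a loss history.

    Returns (stop_epoch, lr_history); epochs are 1-based, matching the log.
    """
    best = float("inf")
    since_best = 0       # epochs since the last improvement (early stopping)
    since_lr_drop = 0    # epochs since the last improvement (LR schedule)
    history = []
    for epoch, loss in enumerate(val_losses, start=1):
        history.append(lr)
        if loss < best:
            best = loss
            since_best = since_lr_drop = 0
        else:
            since_best += 1
            since_lr_drop += 1
            if since_lr_drop >= lr_patience:
                lr = max(lr * factor, min_lr)  # cut the LR, clamped at the floor
                since_lr_drop = 0
            if since_best >= stop_patience:
                return epoch, history          # early stopping fires here
    return len(val_losses), history
```

With one improvement at epoch 1 followed by a flat plateau, the sketch reduces the learning rate every five epochs and stops fifty epochs after the best value, mirroring the shape of the log above.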
SMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 216.26215770312788 
RMSE:	 14.705854538350632 
MAPE:	 11.92367463073216
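Each moving-average variant is scored with MSE, RMSE, MAPE, and two directional-accuracy figures. A minimal sketch of such metrics is below; the directional reading is an assumption (it presumably counts the steps where the predicted move from the previous close has the same sign as the actual move), since the notebook's own scoring code is not shown here:

```python
import math

def regression_metrics(y_true, y_pred):
    """MSE, RMSE and MAPE (in percent) between two equal-length sequences."""
    n = len(y_true)
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
    rmse = math.sqrt(mse)
    mape = 100.0 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / n
    return mse, rmse, mape

def directional_accuracy(y_true, y_pred):
    """Percent of steps where the predicted move matches the sign of the actual move."""
    hits = sum(
        (y_pred[i] - y_true[i - 1]) * (y_true[i] - y_true[i - 1]) > 0
        for i in range(1, len(y_true))
    )
    return 100.0 * hits / (len(y_true) - 1)
```

A directional score near 50% (as in the SMA results above) means the model is roughly at coin-flip level on up/down calls even when the magnitude errors (RMSE, MAPE) look moderate.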
EMA
EMA([input_arrays], [timeperiod=30])

Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
51
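The help text above is TA-Lib's EMA signature. For reference, the same recursion can be sketched in plain Python with smoothing factor k = 2 / (timeperiod + 1); this version seeds the series with the first observation, whereas TA-Lib seeds with an SMA of the first `timeperiod` values and reports a lookback, so its exact numbers differ:

```python
def ema(prices, timeperiod=30):
    """Exponential moving average with smoothing k = 2 / (timeperiod + 1)."""
    k = 2.0 / (timeperiod + 1)
    out = [prices[0]]                          # seed with the first observation
    for p in prices[1:]:
        out.append(p * k + out[-1] * (1 - k))  # blend new price with prior EMA
    return out
```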

Working on EMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.58 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4231.556, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3761.238, Time=0.06 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.40 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3532.227, Time=0.10 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3394.496, Time=0.15 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.14 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.89 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3396.496, Time=0.27 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 3.641 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1693.248
Date:                Sun, 12 Dec 2021   AIC                           3394.496
Time:                        11:58:20   BIC                           3413.260
Sample:                             0   HQIC                          3401.702
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1982      0.003   -389.569      0.000      -1.204      -1.192
ar.L2         -0.8976      0.006   -139.811      0.000      -0.910      -0.885
ar.L3         -0.3984      0.006    -68.662      0.000      -0.410      -0.387
sigma2         3.9230      0.018    215.372      0.000       3.887       3.959
===================================================================================
Ljung-Box (L1) (Q):                  14.54   Jarque-Bera (JB):           2462173.05
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       273.82
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 
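The information criteria that the stepwise search minimizes follow directly from the log likelihood in the SARIMAX summary. Assuming statsmodels counts k = 4 estimated parameters here (three AR coefficients plus sigma2) and an effective sample of 805 (808 observations minus the d = 3 differences), the printed AIC, BIC, and HQIC are reproduced to rounding:

```python
import math

def information_criteria(loglik, k, nobs):
    """AIC, BIC and HQIC from a fitted model's log likelihood."""
    aic = 2 * k - 2 * loglik
    bic = k * math.log(nobs) - 2 * loglik
    hqic = 2 * k * math.log(math.log(nobs)) - 2 * loglik
    return aic, bic, hqic

# Values from the ARIMA(3,3,0) fit above: loglik = -1693.248, k = 4, nobs = 808 - 3
aic, bic, hqic = information_criteria(-1693.248, 4, 805)
```

Lower values indicate a better penalized fit; BIC penalizes extra parameters more heavily than AIC once nobs exceeds about eight, which is why the intercept variant scored a higher AIC above.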

WARNING:tensorflow:Layer lstm_1 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.78839, saving model to LSTM1.h5
16/16 - 2s - loss: 0.3784 - val_loss: 0.7884 - lr: 0.0010 - 2s/epoch - 140ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.78839 to 0.29586, saving model to LSTM1.h5
16/16 - 0s - loss: 0.0819 - val_loss: 0.2959 - lr: 0.0010 - 307ms/epoch - 19ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.29586 to 0.02176, saving model to LSTM1.h5
16/16 - 0s - loss: 0.0443 - val_loss: 0.0218 - lr: 0.0010 - 296ms/epoch - 19ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.02176 to 0.01234, saving model to LSTM1.h5
16/16 - 0s - loss: 0.0428 - val_loss: 0.0123 - lr: 0.0010 - 302ms/epoch - 19ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.01234 to 0.01154, saving model to LSTM1.h5
16/16 - 0s - loss: 0.0448 - val_loss: 0.0115 - lr: 0.0010 - 303ms/epoch - 19ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.01154 to 0.01000, saving model to LSTM1.h5
16/16 - 0s - loss: 0.0398 - val_loss: 0.0100 - lr: 0.0010 - 352ms/epoch - 22ms/step
[Epochs 7-55 omitted: val_loss did not improve from 0.01000. ReduceLROnPlateau reduced the learning rate to 1.0000e-04 at epoch 11 and to 1.0000e-05 at epoch 16.]
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.01000
16/16 - 0s - loss: 0.0272 - val_loss: 0.0148 - lr: 1.0000e-05 - 291ms/epoch - 18ms/step
Epoch 00056: early stopping
EMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 122.96033735804009 
RMSE:	 11.08874823224155 
MAPE:	 9.251696034357076
WMA
WMA([input_arrays], [timeperiod=30])

Weighted Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
49
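TA-Lib's WMA, whose signature is printed above, weights the newest price highest with linearly decaying weights across the window. A plain-Python sketch, emitting only the points that have a full window (which is why TA-Lib reports a lookback):

```python
def wma(prices, timeperiod=30):
    """Weighted moving average: weights 1..timeperiod, newest price weighted most."""
    denom = timeperiod * (timeperiod + 1) / 2   # 1 + 2 + ... + timeperiod
    out = []
    for i in range(timeperiod - 1, len(prices)):
        window = prices[i - timeperiod + 1 : i + 1]
        out.append(sum(w * p for w, p in zip(range(1, timeperiod + 1), window)) / denom)
    return out
```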

Working on WMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.58 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4264.089, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3793.930, Time=0.06 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.34 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3564.923, Time=0.10 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3427.258, Time=0.12 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.69 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.61 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3429.258, Time=0.24 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 3.787 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1709.629
Date:                Sun, 12 Dec 2021   AIC                           3427.258
Time:                        12:00:12   BIC                           3446.021
Sample:                             0   HQIC                          3434.464
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1981      0.003   -389.386      0.000      -1.204      -1.192
ar.L2         -0.8974      0.006   -139.699      0.000      -0.910      -0.885
ar.L3         -0.3983      0.006    -68.737      0.000      -0.410      -0.387
sigma2         4.0860      0.019    215.311      0.000       4.049       4.123
===================================================================================
Ljung-Box (L1) (Q):                  14.57   Jarque-Bera (JB):           2460901.70
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       273.75
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

WARNING:tensorflow:Layer lstm_2 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.03569, saving model to LSTM1.h5
17/17 - 2s - loss: 0.2460 - val_loss: 0.0357 - lr: 0.0010 - 2s/epoch - 133ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.03569
17/17 - 0s - loss: 0.1094 - val_loss: 1.4134 - lr: 0.0010 - 303ms/epoch - 18ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.03569 to 0.03486, saving model to LSTM1.h5
17/17 - 0s - loss: 0.0632 - val_loss: 0.0349 - lr: 0.0010 - 310ms/epoch - 18ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.03486
17/17 - 0s - loss: 0.0723 - val_loss: 0.0602 - lr: 0.0010 - 286ms/epoch - 17ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.03486 to 0.01947, saving model to LSTM1.h5
17/17 - 0s - loss: 0.0427 - val_loss: 0.0195 - lr: 0.0010 - 318ms/epoch - 19ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.01947 to 0.00806, saving model to LSTM1.h5
17/17 - 0s - loss: 0.0412 - val_loss: 0.0081 - lr: 0.0010 - 301ms/epoch - 18ms/step
[Epochs 7-55 omitted: val_loss did not improve from 0.00806. ReduceLROnPlateau reduced the learning rate to 1.0000e-04 at epoch 11 and to 1.0000e-05 at epoch 16.]
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.00806
17/17 - 0s - loss: 0.0329 - val_loss: 0.0286 - lr: 1.0000e-05 - 299ms/epoch - 18ms/step
Epoch 00056: early stopping
SMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 216.26215770312788 
RMSE:	 14.705854538350632 
MAPE:	 11.92367463073216

EMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 122.96033735804009 
RMSE:	 11.08874823224155 
MAPE:	 9.251696034357076

WMA
Prediction vs Close:		53.73% Accuracy
Prediction vs Prediction:	46.64% Accuracy
MSE:	 56.80196950717294 
RMSE:	 7.5367081346681415 
MAPE:	 5.956122993340066

DEMA
DEMA([input_arrays], [timeperiod=30])

Double Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
89
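TA-Lib's DEMA, described in the docstring above, cancels most of a single EMA's lag by subtracting the doubly smoothed series from twice the singly smoothed one. A minimal pure-Python sketch of that formula (seeded from the first value, so unlike TA-Lib it produces no leading NaN warm-up):

```python
def ema(x, n):
    """Exponential moving average with alpha = 2/(n+1), seeded with x[0]."""
    a = 2.0 / (n + 1)
    out = [x[0]]
    for v in x[1:]:
        out.append(a * v + (1 - a) * out[-1])
    return out

def dema(x, n=30):
    """Double EMA: 2*EMA(x) - EMA(EMA(x)), which mostly cancels EMA lag."""
    e1 = ema(x, n)
    e2 = ema(e1, n)
    return [2 * a - b for a, b in zip(e1, e2)]
```

On a steadily trending series the DEMA ends up noticeably closer to price than a plain EMA of the same period, which is why it reacts faster as an input series here.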

Working on DEMA predictions
parameters used: 808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.55 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4436.126, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3965.317, Time=0.06 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.53 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3736.589, Time=0.09 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3598.951, Time=0.11 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.27 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=1.26 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3600.951, Time=0.60 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 4.515 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1795.475
Date:                Sun, 12 Dec 2021   AIC                           3598.951
Time:                        12:02:04   BIC                           3617.714
Sample:                             0   HQIC                          3606.157
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1983      0.003   -389.581      0.000      -1.204      -1.192
ar.L2         -0.8973      0.006   -139.732      0.000      -0.910      -0.885
ar.L3         -0.3983      0.006    -68.649      0.000      -0.410      -0.387
sigma2         5.0573      0.023    215.292      0.000       5.011       5.103
===================================================================================
Ljung-Box (L1) (Q):                  14.41   Jarque-Bera (JB):           2460553.80
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.89
Prob(H) (two-sided):                  0.00   Kurtosis:                       273.74
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 
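As a quick consistency check, the information criteria in the summary follow directly from the reported log likelihood. A sketch, assuming k = 4 estimated parameters (three AR terms plus sigma2) and an effective sample of 808 - 3 = 805 observations after third-order differencing, which appears to be how statsmodels counts observations here:

```python
import math

# Values read off the SARIMAX summary above
log_lik = -1795.475
k = 4            # ar.L1, ar.L2, ar.L3, sigma2
nobs = 808 - 3   # effective sample after d=3 differencing (assumption)

aic = 2 * k - 2 * log_lik                         # Akaike
bic = k * math.log(nobs) - 2 * log_lik            # Bayesian
hqic = 2 * k * math.log(math.log(nobs)) - 2 * log_lik  # Hannan-Quinn
```

These agree with the tabulated AIC/BIC/HQIC (3598.951 / 3617.714 / 3606.157) to rounding, confirming that the stepwise search is ranking candidates on exactly this AIC.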

WARNING:tensorflow:Layer lstm_3 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 00001: val_loss improved from inf to 0.24906, saving model to LSTM1.h5
10/10 - 2s - loss: 0.4063 - val_loss: 0.2491 - lr: 0.0010 - 2s/epoch - 211ms/step
Epoch 00002: val_loss improved from 0.24906 to 0.02406, saving model to LSTM1.h5
Epoch 00005: val_loss improved from 0.02406 to 0.01876, saving model to LSTM1.h5
10/10 - 0s - loss: 0.0590 - val_loss: 0.0188 - lr: 0.0010 - 225ms/epoch - 23ms/step
Epoch 00010: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00015: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epochs 6-55: val_loss did not improve from 0.01876 (lr held at 1.0000e-05 after epoch 15, ~170-240ms/epoch)
Epoch 00055: early stopping
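The shape of this log (improvements early, a 10x learning-rate cut after each plateau, a stop long after the best epoch) is the combined effect of the ModelCheckpoint, ReduceLROnPlateau, and EarlyStopping callbacks. Their bookkeeping can be replayed with a small pure-Python stand-in; the patience values below are illustrative assumptions, not read from the notebook code:

```python
def run_training(val_losses, lr=1e-3, factor=0.1, min_lr=1e-5,
                 plateau_patience=4, stop_patience=10):
    """Replay a val_loss sequence through plateau-LR + early-stopping logic."""
    best = float("inf")
    wait_lr = wait_stop = 0
    history = []                      # (epoch, lr, best-so-far)
    for epoch, vl in enumerate(val_losses, start=1):
        if vl < best:                 # improvement: checkpoint would be saved
            best, wait_lr, wait_stop = vl, 0, 0
        else:
            wait_lr += 1
            wait_stop += 1
        if wait_lr >= plateau_patience:
            lr = max(lr * factor, min_lr)   # ReduceLROnPlateau step
            wait_lr = 0
        history.append((epoch, lr, best))
        if wait_stop >= stop_patience:      # EarlyStopping fires
            break
    return history
```

Note that EarlyStopping restores nothing by itself; it is the saved LSTM1.h5 checkpoint, written at the best val_loss epoch, that the notebook evaluates afterwards.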
SMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 216.26215770312788 
RMSE:	 14.705854538350632 
MAPE:	 11.92367463073216

EMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 122.96033735804009 
RMSE:	 11.08874823224155 
MAPE:	 9.251696034357076

WMA
Prediction vs Close:		53.73% Accuracy
Prediction vs Prediction:	46.64% Accuracy
MSE:	 56.80196950717294 
RMSE:	 7.5367081346681415 
MAPE:	 5.956122993340066

DEMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	42.16% Accuracy
MSE:	 120.51656357671082 
RMSE:	 10.978003624371365 
MAPE:	 9.343426819843298
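The per-indicator scores above suggest an evaluation along the following lines. A sketch using numpy; the directional-accuracy definition (reading "Prediction vs Close" as the share of days where the predicted move has the same sign as the actual move) is an assumption, since the notebook's scoring code is not shown in this output:

```python
import numpy as np

def evaluate(pred, close):
    """Level-error metrics plus an assumed directional-accuracy measure."""
    pred, close = np.asarray(pred, float), np.asarray(close, float)
    mse = np.mean((pred - close) ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs((close - pred) / close)) * 100
    # % of days where the predicted day-over-day move matches the actual one
    direction = np.mean(np.sign(np.diff(pred)) == np.sign(np.diff(close))) * 100
    return mse, rmse, mape, direction
```

On these figures WMA gives the lowest level error (RMSE 7.54, MAPE 5.96) even though its directional accuracy is close to the others, a reminder that the two kinds of score need not move together.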

KAMA
KAMA([input_arrays], [timeperiod=30])

Kaufman Adaptive Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
18
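Unlike the fixed-weight averages above, KAMA adapts its smoothing constant to an efficiency ratio (net change over total path length), so it hugs trends and flattens out in choppy stretches. A sketch following Kaufman's published recipe with the usual fast=2 / slow=30 constants; TA-Lib's exact seeding and warm-up handling may differ:

```python
def kama(price, n=30, fast=2, slow=30):
    """Kaufman Adaptive MA: smoothing speed scales with trend efficiency."""
    fast_sc = 2.0 / (fast + 1)          # fastest allowed EMA constant
    slow_sc = 2.0 / (slow + 1)          # slowest allowed EMA constant
    out = [price[0]]
    for t in range(1, len(price)):
        lo = max(0, t - n)
        change = abs(price[t] - price[lo])      # net move over the window
        vol = sum(abs(price[i] - price[i - 1]) for i in range(lo + 1, t + 1))
        er = change / vol if vol else 0.0       # efficiency ratio in [0, 1]
        sc = (er * (fast_sc - slow_sc) + slow_sc) ** 2
        out.append(out[-1] + sc * (price[t] - out[-1]))
    return out
```

The squared smoothing constant means KAMA slows down sharply as the efficiency ratio falls, which is why it produces a calmer input series for the ARIMA stage than an EMA of the same period.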

Working on KAMA predictions
parameters used: 808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.50 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4190.464, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3724.371, Time=0.06 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.38 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3494.154, Time=0.10 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3357.435, Time=0.13 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.62 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.99 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3359.435, Time=0.27 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 4.093 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1674.717
Date:                Sun, 12 Dec 2021   AIC                           3357.435
Time:                        12:03:48   BIC                           3376.198
Sample:                             0   HQIC                          3364.641
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1955      0.003   -381.246      0.000      -1.202      -1.189
ar.L2         -0.8964      0.007   -135.835      0.000      -0.909      -0.883
ar.L3         -0.3971      0.006    -67.229      0.000      -0.409      -0.385
sigma2         3.7466      0.018    211.623      0.000       3.712       3.781
===================================================================================
Ljung-Box (L1) (Q):                  14.20   Jarque-Bera (JB):           2338363.32
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.01   Skew:                             3.76
Prob(H) (two-sided):                  0.00   Kurtosis:                       266.93
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

WARNING:tensorflow:Layer lstm_4 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 00001: val_loss improved from inf to 0.06791, saving model to LSTM1.h5
45/45 - 3s - loss: 0.4313 - val_loss: 0.0679 - lr: 0.0010 - 3s/epoch - 60ms/step
Epochs 2-5: val_loss did not improve from 0.06791
Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epochs 9-23: val_loss improved in steps from 0.06791 to 0.02332, saving model to LSTM1.h5 at each improvement
Epochs 24-55: val_loss did not improve from 0.02332
Epoch 00028: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epochs 56-108: val_loss improved in steps from 0.02332 to 0.01534, saving model to LSTM1.h5 at each improvement
Epochs 109-110: val_loss did not improve from 0.01534
45/45 - 1s - loss: 0.0256 - val_loss: 0.0162 - lr: 1.0000e-05 - 694ms/epoch - 15ms/step
Epoch 111/500

Epoch 00111: val_loss did not improve from 0.01534
45/45 - 1s - loss: 0.0255 - val_loss: 0.0170 - lr: 1.0000e-05 - 723ms/epoch - 16ms/step
Epoch 112/500

Epoch 00112: val_loss did not improve from 0.01534
45/45 - 1s - loss: 0.0282 - val_loss: 0.0180 - lr: 1.0000e-05 - 701ms/epoch - 16ms/step
Epoch 113/500

Epoch 00113: val_loss did not improve from 0.01534
45/45 - 1s - loss: 0.0281 - val_loss: 0.0180 - lr: 1.0000e-05 - 723ms/epoch - 16ms/step
Epoch 114/500

Epoch 00114: val_loss did not improve from 0.01534
45/45 - 1s - loss: 0.0262 - val_loss: 0.0185 - lr: 1.0000e-05 - 730ms/epoch - 16ms/step
Epoch 115/500

Epoch 00115: val_loss did not improve from 0.01534
45/45 - 1s - loss: 0.0278 - val_loss: 0.0180 - lr: 1.0000e-05 - 726ms/epoch - 16ms/step
Epoch 116/500

Epoch 00116: val_loss did not improve from 0.01534
45/45 - 1s - loss: 0.0268 - val_loss: 0.0185 - lr: 1.0000e-05 - 752ms/epoch - 17ms/step
Epoch 117/500

Epoch 00117: val_loss did not improve from 0.01534
45/45 - 1s - loss: 0.0276 - val_loss: 0.0180 - lr: 1.0000e-05 - 740ms/epoch - 16ms/step
Epoch 118/500

Epoch 00118: val_loss did not improve from 0.01534
45/45 - 1s - loss: 0.0245 - val_loss: 0.0177 - lr: 1.0000e-05 - 756ms/epoch - 17ms/step
Epoch 119/500

Epoch 00119: val_loss did not improve from 0.01534
45/45 - 1s - loss: 0.0255 - val_loss: 0.0171 - lr: 1.0000e-05 - 719ms/epoch - 16ms/step
Epoch 120/500

Epoch 00120: val_loss did not improve from 0.01534
45/45 - 1s - loss: 0.0277 - val_loss: 0.0169 - lr: 1.0000e-05 - 719ms/epoch - 16ms/step
Epoch 121/500

Epoch 00121: val_loss did not improve from 0.01534
45/45 - 1s - loss: 0.0252 - val_loss: 0.0172 - lr: 1.0000e-05 - 691ms/epoch - 15ms/step
Epoch 122/500

Epoch 00122: val_loss did not improve from 0.01534
45/45 - 1s - loss: 0.0271 - val_loss: 0.0175 - lr: 1.0000e-05 - 722ms/epoch - 16ms/step
Epoch 123/500

Epoch 00123: val_loss did not improve from 0.01534
45/45 - 1s - loss: 0.0281 - val_loss: 0.0170 - lr: 1.0000e-05 - 722ms/epoch - 16ms/step
Epoch 124/500

Epoch 00124: val_loss did not improve from 0.01534
45/45 - 1s - loss: 0.0267 - val_loss: 0.0169 - lr: 1.0000e-05 - 733ms/epoch - 16ms/step
Epoch 125/500

Epoch 00125: val_loss did not improve from 0.01534
45/45 - 1s - loss: 0.0245 - val_loss: 0.0156 - lr: 1.0000e-05 - 699ms/epoch - 16ms/step
Epoch 126/500

Epoch 00126: val_loss did not improve from 0.01534
45/45 - 1s - loss: 0.0258 - val_loss: 0.0156 - lr: 1.0000e-05 - 761ms/epoch - 17ms/step
Epoch 127/500

Epoch 00127: val_loss did not improve from 0.01534
45/45 - 1s - loss: 0.0271 - val_loss: 0.0156 - lr: 1.0000e-05 - 720ms/epoch - 16ms/step
Epoch 128/500

Epoch 00128: val_loss did not improve from 0.01534
45/45 - 1s - loss: 0.0276 - val_loss: 0.0161 - lr: 1.0000e-05 - 786ms/epoch - 17ms/step
Epoch 129/500

Epoch 00129: val_loss did not improve from 0.01534
45/45 - 1s - loss: 0.0277 - val_loss: 0.0155 - lr: 1.0000e-05 - 743ms/epoch - 17ms/step
Epoch 130/500

Epoch 00130: val_loss did not improve from 0.01534
45/45 - 1s - loss: 0.0243 - val_loss: 0.0160 - lr: 1.0000e-05 - 717ms/epoch - 16ms/step
Epoch 131/500

Epoch 00131: val_loss did not improve from 0.01534
45/45 - 1s - loss: 0.0263 - val_loss: 0.0165 - lr: 1.0000e-05 - 731ms/epoch - 16ms/step
Epoch 132/500

Epoch 00132: val_loss did not improve from 0.01534
45/45 - 1s - loss: 0.0244 - val_loss: 0.0160 - lr: 1.0000e-05 - 730ms/epoch - 16ms/step
Epoch 133/500

Epoch 00133: val_loss improved from 0.01534 to 0.01410, saving model to LSTM1.h5
45/45 - 1s - loss: 0.0230 - val_loss: 0.0141 - lr: 1.0000e-05 - 944ms/epoch - 21ms/step
Epoch 134/500

Epoch 00134: val_loss improved from 0.01410 to 0.01371, saving model to LSTM1.h5
45/45 - 1s - loss: 0.0243 - val_loss: 0.0137 - lr: 1.0000e-05 - 737ms/epoch - 16ms/step
Epoch 135/500

Epoch 00135: val_loss did not improve from 0.01371
45/45 - 1s - loss: 0.0259 - val_loss: 0.0138 - lr: 1.0000e-05 - 737ms/epoch - 16ms/step
Epoch 136/500

Epoch 00136: val_loss improved from 0.01371 to 0.01339, saving model to LSTM1.h5
45/45 - 1s - loss: 0.0256 - val_loss: 0.0134 - lr: 1.0000e-05 - 772ms/epoch - 17ms/step
Epoch 137/500

Epoch 00137: val_loss did not improve from 0.01339
45/45 - 1s - loss: 0.0269 - val_loss: 0.0143 - lr: 1.0000e-05 - 692ms/epoch - 15ms/step
Epoch 138/500

Epoch 00138: val_loss did not improve from 0.01339
45/45 - 1s - loss: 0.0249 - val_loss: 0.0156 - lr: 1.0000e-05 - 703ms/epoch - 16ms/step
Epoch 139/500

Epoch 00139: val_loss did not improve from 0.01339
45/45 - 1s - loss: 0.0262 - val_loss: 0.0149 - lr: 1.0000e-05 - 702ms/epoch - 16ms/step
Epoch 140/500

Epoch 00140: val_loss did not improve from 0.01339
45/45 - 1s - loss: 0.0244 - val_loss: 0.0155 - lr: 1.0000e-05 - 724ms/epoch - 16ms/step
Epoch 141/500

Epoch 00141: val_loss did not improve from 0.01339
45/45 - 1s - loss: 0.0263 - val_loss: 0.0173 - lr: 1.0000e-05 - 717ms/epoch - 16ms/step
Epoch 142/500

Epoch 00142: val_loss did not improve from 0.01339
45/45 - 1s - loss: 0.0269 - val_loss: 0.0175 - lr: 1.0000e-05 - 738ms/epoch - 16ms/step
Epoch 143/500

Epoch 00143: val_loss did not improve from 0.01339
45/45 - 1s - loss: 0.0258 - val_loss: 0.0169 - lr: 1.0000e-05 - 716ms/epoch - 16ms/step
Epoch 144/500

Epoch 00144: val_loss did not improve from 0.01339
45/45 - 1s - loss: 0.0248 - val_loss: 0.0161 - lr: 1.0000e-05 - 736ms/epoch - 16ms/step
Epoch 145/500

Epoch 00145: val_loss did not improve from 0.01339
45/45 - 1s - loss: 0.0242 - val_loss: 0.0146 - lr: 1.0000e-05 - 734ms/epoch - 16ms/step
Epoch 146/500

Epoch 00146: val_loss did not improve from 0.01339
45/45 - 1s - loss: 0.0253 - val_loss: 0.0137 - lr: 1.0000e-05 - 768ms/epoch - 17ms/step
Epoch 147/500

Epoch 00147: val_loss improved from 0.01339 to 0.01330, saving model to LSTM1.h5
45/45 - 1s - loss: 0.0269 - val_loss: 0.0133 - lr: 1.0000e-05 - 775ms/epoch - 17ms/step
Epoch 148/500

Epoch 00148: val_loss improved from 0.01330 to 0.01313, saving model to LSTM1.h5
45/45 - 1s - loss: 0.0252 - val_loss: 0.0131 - lr: 1.0000e-05 - 728ms/epoch - 16ms/step
Epoch 149/500

Epoch 00149: val_loss improved from 0.01313 to 0.01251, saving model to LSTM1.h5
45/45 - 1s - loss: 0.0263 - val_loss: 0.0125 - lr: 1.0000e-05 - 788ms/epoch - 18ms/step
Epoch 150/500

Epoch 00150: val_loss did not improve from 0.01251
45/45 - 1s - loss: 0.0222 - val_loss: 0.0129 - lr: 1.0000e-05 - 724ms/epoch - 16ms/step
Epoch 151/500

Epoch 00151: val_loss improved from 0.01251 to 0.01186, saving model to LSTM1.h5
45/45 - 1s - loss: 0.0274 - val_loss: 0.0119 - lr: 1.0000e-05 - 762ms/epoch - 17ms/step
Epoch 152/500

Epoch 00152: val_loss improved from 0.01186 to 0.01170, saving model to LSTM1.h5
45/45 - 1s - loss: 0.0272 - val_loss: 0.0117 - lr: 1.0000e-05 - 737ms/epoch - 16ms/step
Epoch 153/500

Epoch 00153: val_loss did not improve from 0.01170
45/45 - 1s - loss: 0.0253 - val_loss: 0.0123 - lr: 1.0000e-05 - 719ms/epoch - 16ms/step
Epoch 154/500

Epoch 00154: val_loss did not improve from 0.01170
45/45 - 1s - loss: 0.0228 - val_loss: 0.0118 - lr: 1.0000e-05 - 727ms/epoch - 16ms/step
Epoch 155/500

Epoch 00155: val_loss did not improve from 0.01170
45/45 - 1s - loss: 0.0253 - val_loss: 0.0122 - lr: 1.0000e-05 - 699ms/epoch - 16ms/step
Epoch 156/500

Epoch 00156: val_loss did not improve from 0.01170
45/45 - 1s - loss: 0.0238 - val_loss: 0.0132 - lr: 1.0000e-05 - 724ms/epoch - 16ms/step
Epoch 157/500

Epoch 00157: val_loss did not improve from 0.01170
45/45 - 1s - loss: 0.0234 - val_loss: 0.0141 - lr: 1.0000e-05 - 700ms/epoch - 16ms/step
Epoch 158/500

Epoch 00158: val_loss did not improve from 0.01170
45/45 - 1s - loss: 0.0259 - val_loss: 0.0138 - lr: 1.0000e-05 - 776ms/epoch - 17ms/step
Epoch 159/500

Epoch 00159: val_loss did not improve from 0.01170
45/45 - 1s - loss: 0.0258 - val_loss: 0.0139 - lr: 1.0000e-05 - 736ms/epoch - 16ms/step
Epoch 160/500

Epoch 00160: val_loss did not improve from 0.01170
45/45 - 1s - loss: 0.0267 - val_loss: 0.0135 - lr: 1.0000e-05 - 710ms/epoch - 16ms/step
Epoch 161/500

Epoch 00161: val_loss did not improve from 0.01170
45/45 - 1s - loss: 0.0226 - val_loss: 0.0142 - lr: 1.0000e-05 - 743ms/epoch - 17ms/step
Epoch 162/500

Epoch 00162: val_loss did not improve from 0.01170
45/45 - 1s - loss: 0.0253 - val_loss: 0.0146 - lr: 1.0000e-05 - 750ms/epoch - 17ms/step
Epoch 163/500

Epoch 00163: val_loss did not improve from 0.01170
45/45 - 1s - loss: 0.0259 - val_loss: 0.0145 - lr: 1.0000e-05 - 732ms/epoch - 16ms/step
Epoch 164/500

Epoch 00164: val_loss did not improve from 0.01170
45/45 - 1s - loss: 0.0264 - val_loss: 0.0147 - lr: 1.0000e-05 - 740ms/epoch - 16ms/step
Epoch 165/500

Epoch 00165: val_loss did not improve from 0.01170
45/45 - 1s - loss: 0.0237 - val_loss: 0.0141 - lr: 1.0000e-05 - 748ms/epoch - 17ms/step
Epoch 166/500

Epoch 00166: val_loss did not improve from 0.01170
45/45 - 1s - loss: 0.0266 - val_loss: 0.0141 - lr: 1.0000e-05 - 774ms/epoch - 17ms/step
Epoch 167/500

Epoch 00167: val_loss did not improve from 0.01170
45/45 - 1s - loss: 0.0264 - val_loss: 0.0140 - lr: 1.0000e-05 - 715ms/epoch - 16ms/step
Epoch 168/500

Epoch 00168: val_loss did not improve from 0.01170
45/45 - 1s - loss: 0.0266 - val_loss: 0.0136 - lr: 1.0000e-05 - 722ms/epoch - 16ms/step
Epoch 169/500

Epoch 00169: val_loss did not improve from 0.01170
45/45 - 1s - loss: 0.0239 - val_loss: 0.0127 - lr: 1.0000e-05 - 729ms/epoch - 16ms/step
Epoch 170/500

Epoch 00170: val_loss did not improve from 0.01170
45/45 - 1s - loss: 0.0243 - val_loss: 0.0126 - lr: 1.0000e-05 - 701ms/epoch - 16ms/step
Epoch 171/500

Epoch 00171: val_loss did not improve from 0.01170
45/45 - 1s - loss: 0.0266 - val_loss: 0.0124 - lr: 1.0000e-05 - 701ms/epoch - 16ms/step
Epoch 172/500

Epoch 00172: val_loss did not improve from 0.01170
45/45 - 1s - loss: 0.0253 - val_loss: 0.0128 - lr: 1.0000e-05 - 696ms/epoch - 15ms/step
Epoch 173/500

Epoch 00173: val_loss did not improve from 0.01170
45/45 - 1s - loss: 0.0242 - val_loss: 0.0133 - lr: 1.0000e-05 - 746ms/epoch - 17ms/step
Epoch 174/500

Epoch 00174: val_loss did not improve from 0.01170
45/45 - 1s - loss: 0.0226 - val_loss: 0.0130 - lr: 1.0000e-05 - 730ms/epoch - 16ms/step
Epoch 175/500

Epoch 00175: val_loss did not improve from 0.01170
45/45 - 1s - loss: 0.0235 - val_loss: 0.0139 - lr: 1.0000e-05 - 749ms/epoch - 17ms/step
Epoch 176/500

Epoch 00176: val_loss did not improve from 0.01170
45/45 - 1s - loss: 0.0241 - val_loss: 0.0131 - lr: 1.0000e-05 - 791ms/epoch - 18ms/step
Epoch 177/500

Epoch 00177: val_loss did not improve from 0.01170
45/45 - 1s - loss: 0.0228 - val_loss: 0.0127 - lr: 1.0000e-05 - 711ms/epoch - 16ms/step
Epoch 178/500

Epoch 00178: val_loss did not improve from 0.01170
45/45 - 1s - loss: 0.0246 - val_loss: 0.0128 - lr: 1.0000e-05 - 736ms/epoch - 16ms/step
Epoch 179/500

Epoch 00179: val_loss improved from 0.01170 to 0.01169, saving model to LSTM1.h5
45/45 - 1s - loss: 0.0246 - val_loss: 0.0117 - lr: 1.0000e-05 - 959ms/epoch - 21ms/step
Epoch 180/500

Epoch 00180: val_loss improved from 0.01169 to 0.01131, saving model to LSTM1.h5
45/45 - 1s - loss: 0.0272 - val_loss: 0.0113 - lr: 1.0000e-05 - 792ms/epoch - 18ms/step
Epoch 181/500

Epoch 00181: val_loss did not improve from 0.01131
45/45 - 1s - loss: 0.0259 - val_loss: 0.0115 - lr: 1.0000e-05 - 746ms/epoch - 17ms/step
Epoch 182/500

Epoch 00182: val_loss did not improve from 0.01131
45/45 - 1s - loss: 0.0268 - val_loss: 0.0117 - lr: 1.0000e-05 - 725ms/epoch - 16ms/step
Epoch 183/500

Epoch 00183: val_loss did not improve from 0.01131
45/45 - 1s - loss: 0.0252 - val_loss: 0.0118 - lr: 1.0000e-05 - 684ms/epoch - 15ms/step
Epoch 184/500

Epoch 00184: val_loss did not improve from 0.01131
45/45 - 1s - loss: 0.0265 - val_loss: 0.0117 - lr: 1.0000e-05 - 680ms/epoch - 15ms/step
Epoch 185/500

Epoch 00185: val_loss did not improve from 0.01131
45/45 - 1s - loss: 0.0229 - val_loss: 0.0114 - lr: 1.0000e-05 - 710ms/epoch - 16ms/step
Epoch 186/500

Epoch 00186: val_loss improved from 0.01131 to 0.01106, saving model to LSTM1.h5
45/45 - 1s - loss: 0.0238 - val_loss: 0.0111 - lr: 1.0000e-05 - 754ms/epoch - 17ms/step
Epoch 187/500

Epoch 00187: val_loss did not improve from 0.01106
45/45 - 1s - loss: 0.0266 - val_loss: 0.0113 - lr: 1.0000e-05 - 744ms/epoch - 17ms/step
Epoch 188/500

Epoch 00188: val_loss improved from 0.01106 to 0.01062, saving model to LSTM1.h5
45/45 - 1s - loss: 0.0267 - val_loss: 0.0106 - lr: 1.0000e-05 - 799ms/epoch - 18ms/step
Epoch 189/500

Epoch 00189: val_loss improved from 0.01062 to 0.01022, saving model to LSTM1.h5
45/45 - 1s - loss: 0.0240 - val_loss: 0.0102 - lr: 1.0000e-05 - 753ms/epoch - 17ms/step
Epoch 190/500

Epoch 00190: val_loss improved from 0.01022 to 0.01010, saving model to LSTM1.h5
45/45 - 1s - loss: 0.0240 - val_loss: 0.0101 - lr: 1.0000e-05 - 789ms/epoch - 18ms/step
Epoch 191/500

Epoch 00191: val_loss did not improve from 0.01010
45/45 - 1s - loss: 0.0245 - val_loss: 0.0104 - lr: 1.0000e-05 - 713ms/epoch - 16ms/step
Epoch 192/500

Epoch 00192: val_loss did not improve from 0.01010
45/45 - 1s - loss: 0.0250 - val_loss: 0.0109 - lr: 1.0000e-05 - 713ms/epoch - 16ms/step
Epoch 193/500

Epoch 00193: val_loss did not improve from 0.01010
45/45 - 1s - loss: 0.0250 - val_loss: 0.0116 - lr: 1.0000e-05 - 733ms/epoch - 16ms/step
Epoch 194/500

Epoch 00194: val_loss did not improve from 0.01010
45/45 - 1s - loss: 0.0252 - val_loss: 0.0121 - lr: 1.0000e-05 - 704ms/epoch - 16ms/step
Epoch 195/500

Epoch 00195: val_loss did not improve from 0.01010
45/45 - 1s - loss: 0.0241 - val_loss: 0.0125 - lr: 1.0000e-05 - 706ms/epoch - 16ms/step
Epoch 196/500

Epoch 00196: val_loss did not improve from 0.01010
45/45 - 1s - loss: 0.0275 - val_loss: 0.0138 - lr: 1.0000e-05 - 748ms/epoch - 17ms/step
Epoch 197/500

Epoch 00197: val_loss did not improve from 0.01010
45/45 - 1s - loss: 0.0253 - val_loss: 0.0125 - lr: 1.0000e-05 - 736ms/epoch - 16ms/step
Epoch 198/500

Epoch 00198: val_loss did not improve from 0.01010
45/45 - 1s - loss: 0.0268 - val_loss: 0.0114 - lr: 1.0000e-05 - 716ms/epoch - 16ms/step
Epoch 199/500

Epoch 00199: val_loss did not improve from 0.01010
45/45 - 1s - loss: 0.0241 - val_loss: 0.0109 - lr: 1.0000e-05 - 704ms/epoch - 16ms/step
Epoch 200/500

Epoch 00200: val_loss did not improve from 0.01010
45/45 - 1s - loss: 0.0238 - val_loss: 0.0109 - lr: 1.0000e-05 - 719ms/epoch - 16ms/step
Epoch 201/500

Epoch 00201: val_loss did not improve from 0.01010
45/45 - 1s - loss: 0.0282 - val_loss: 0.0105 - lr: 1.0000e-05 - 695ms/epoch - 15ms/step
Epoch 202/500

Epoch 00202: val_loss improved from 0.01010 to 0.01004, saving model to LSTM1.h5
45/45 - 1s - loss: 0.0242 - val_loss: 0.0100 - lr: 1.0000e-05 - 894ms/epoch - 20ms/step
Epoch 203/500

Epoch 00203: val_loss did not improve from 0.01004
45/45 - 1s - loss: 0.0227 - val_loss: 0.0106 - lr: 1.0000e-05 - 703ms/epoch - 16ms/step
Epoch 204/500

Epoch 00204: val_loss did not improve from 0.01004
45/45 - 1s - loss: 0.0241 - val_loss: 0.0110 - lr: 1.0000e-05 - 733ms/epoch - 16ms/step
Epoch 205/500

Epoch 00205: val_loss did not improve from 0.01004
45/45 - 1s - loss: 0.0239 - val_loss: 0.0111 - lr: 1.0000e-05 - 729ms/epoch - 16ms/step
Epoch 206/500

Epoch 00206: val_loss did not improve from 0.01004
45/45 - 1s - loss: 0.0249 - val_loss: 0.0108 - lr: 1.0000e-05 - 720ms/epoch - 16ms/step
Epoch 207/500

Epoch 00207: val_loss did not improve from 0.01004
45/45 - 1s - loss: 0.0230 - val_loss: 0.0105 - lr: 1.0000e-05 - 747ms/epoch - 17ms/step
Epoch 208/500

Epoch 00208: val_loss improved from 0.01004 to 0.01003, saving model to LSTM1.h5
45/45 - 1s - loss: 0.0240 - val_loss: 0.0100 - lr: 1.0000e-05 - 753ms/epoch - 17ms/step
Epoch 209/500

Epoch 00209: val_loss improved from 0.01003 to 0.00960, saving model to LSTM1.h5
45/45 - 1s - loss: 0.0271 - val_loss: 0.0096 - lr: 1.0000e-05 - 772ms/epoch - 17ms/step
Epoch 210/500

Epoch 00210: val_loss improved from 0.00960 to 0.00936, saving model to LSTM1.h5
45/45 - 1s - loss: 0.0235 - val_loss: 0.0094 - lr: 1.0000e-05 - 731ms/epoch - 16ms/step
Epoch 211/500

Epoch 00211: val_loss improved from 0.00936 to 0.00893, saving model to LSTM1.h5
45/45 - 1s - loss: 0.0237 - val_loss: 0.0089 - lr: 1.0000e-05 - 752ms/epoch - 17ms/step
Epoch 212/500

Epoch 00212: val_loss improved from 0.00893 to 0.00879, saving model to LSTM1.h5
45/45 - 1s - loss: 0.0209 - val_loss: 0.0088 - lr: 1.0000e-05 - 781ms/epoch - 17ms/step
Epoch 213/500

Epoch 00213: val_loss did not improve from 0.00879
45/45 - 1s - loss: 0.0227 - val_loss: 0.0091 - lr: 1.0000e-05 - 730ms/epoch - 16ms/step
Epoch 214/500

Epoch 00214: val_loss did not improve from 0.00879
45/45 - 1s - loss: 0.0222 - val_loss: 0.0092 - lr: 1.0000e-05 - 754ms/epoch - 17ms/step
Epoch 215/500

Epoch 00215: val_loss did not improve from 0.00879
45/45 - 1s - loss: 0.0230 - val_loss: 0.0094 - lr: 1.0000e-05 - 702ms/epoch - 16ms/step
Epoch 216/500

Epoch 00216: val_loss did not improve from 0.00879
45/45 - 1s - loss: 0.0245 - val_loss: 0.0090 - lr: 1.0000e-05 - 725ms/epoch - 16ms/step
Epoch 217/500

Epoch 00217: val_loss did not improve from 0.00879
45/45 - 1s - loss: 0.0241 - val_loss: 0.0090 - lr: 1.0000e-05 - 725ms/epoch - 16ms/step
Epoch 218/500

Epoch 00218: val_loss did not improve from 0.00879
45/45 - 1s - loss: 0.0237 - val_loss: 0.0092 - lr: 1.0000e-05 - 686ms/epoch - 15ms/step
Epoch 219/500

Epoch 00219: val_loss did not improve from 0.00879
45/45 - 1s - loss: 0.0230 - val_loss: 0.0089 - lr: 1.0000e-05 - 736ms/epoch - 16ms/step
Epoch 220/500

Epoch 00220: val_loss did not improve from 0.00879
45/45 - 1s - loss: 0.0247 - val_loss: 0.0094 - lr: 1.0000e-05 - 709ms/epoch - 16ms/step
Epoch 221/500

Epoch 00221: val_loss did not improve from 0.00879
45/45 - 1s - loss: 0.0225 - val_loss: 0.0091 - lr: 1.0000e-05 - 742ms/epoch - 16ms/step
Epoch 222/500

Epoch 00222: val_loss did not improve from 0.00879
45/45 - 1s - loss: 0.0234 - val_loss: 0.0090 - lr: 1.0000e-05 - 760ms/epoch - 17ms/step
Epoch 223/500

Epoch 00223: val_loss did not improve from 0.00879
45/45 - 1s - loss: 0.0242 - val_loss: 0.0091 - lr: 1.0000e-05 - 783ms/epoch - 17ms/step
Epoch 224/500

Epoch 00224: val_loss did not improve from 0.00879
45/45 - 1s - loss: 0.0204 - val_loss: 0.0099 - lr: 1.0000e-05 - 744ms/epoch - 17ms/step
Epoch 225/500

Epoch 00225: val_loss did not improve from 0.00879
45/45 - 1s - loss: 0.0229 - val_loss: 0.0099 - lr: 1.0000e-05 - 722ms/epoch - 16ms/step
Epoch 226/500

Epoch 00226: val_loss did not improve from 0.00879
45/45 - 1s - loss: 0.0240 - val_loss: 0.0093 - lr: 1.0000e-05 - 737ms/epoch - 16ms/step
Epoch 227/500

Epoch 00227: val_loss did not improve from 0.00879
45/45 - 1s - loss: 0.0229 - val_loss: 0.0093 - lr: 1.0000e-05 - 742ms/epoch - 16ms/step
Epoch 228/500

Epoch 00228: val_loss did not improve from 0.00879
45/45 - 1s - loss: 0.0217 - val_loss: 0.0095 - lr: 1.0000e-05 - 683ms/epoch - 15ms/step
Epoch 229/500

Epoch 00229: val_loss did not improve from 0.00879
45/45 - 1s - loss: 0.0214 - val_loss: 0.0097 - lr: 1.0000e-05 - 735ms/epoch - 16ms/step
Epoch 230/500

Epoch 00230: val_loss did not improve from 0.00879
45/45 - 1s - loss: 0.0242 - val_loss: 0.0097 - lr: 1.0000e-05 - 753ms/epoch - 17ms/step
Epoch 231/500

Epoch 00231: val_loss did not improve from 0.00879
45/45 - 1s - loss: 0.0215 - val_loss: 0.0100 - lr: 1.0000e-05 - 726ms/epoch - 16ms/step
Epoch 232/500

Epoch 00232: val_loss did not improve from 0.00879
45/45 - 1s - loss: 0.0257 - val_loss: 0.0097 - lr: 1.0000e-05 - 714ms/epoch - 16ms/step
Epoch 233/500

Epoch 00233: val_loss did not improve from 0.00879
45/45 - 1s - loss: 0.0219 - val_loss: 0.0096 - lr: 1.0000e-05 - 715ms/epoch - 16ms/step
Epoch 234/500

Epoch 00234: val_loss did not improve from 0.00879
45/45 - 1s - loss: 0.0238 - val_loss: 0.0098 - lr: 1.0000e-05 - 737ms/epoch - 16ms/step
Epoch 235/500

Epoch 00235: val_loss did not improve from 0.00879
45/45 - 1s - loss: 0.0254 - val_loss: 0.0101 - lr: 1.0000e-05 - 724ms/epoch - 16ms/step
Epoch 236/500

Epoch 00236: val_loss did not improve from 0.00879
45/45 - 1s - loss: 0.0219 - val_loss: 0.0101 - lr: 1.0000e-05 - 737ms/epoch - 16ms/step
Epoch 237/500

Epoch 00237: val_loss did not improve from 0.00879
45/45 - 1s - loss: 0.0236 - val_loss: 0.0095 - lr: 1.0000e-05 - 756ms/epoch - 17ms/step
Epoch 238/500

Epoch 00238: val_loss improved from 0.00879 to 0.00874, saving model to LSTM1.h5
45/45 - 1s - loss: 0.0232 - val_loss: 0.0087 - lr: 1.0000e-05 - 913ms/epoch - 20ms/step
Epoch 239/500

Epoch 00239: val_loss did not improve from 0.00874
45/45 - 1s - loss: 0.0206 - val_loss: 0.0088 - lr: 1.0000e-05 - 764ms/epoch - 17ms/step
Epoch 240/500

Epoch 00240: val_loss did not improve from 0.00874
45/45 - 1s - loss: 0.0215 - val_loss: 0.0090 - lr: 1.0000e-05 - 730ms/epoch - 16ms/step
Epoch 241/500

Epoch 00241: val_loss did not improve from 0.00874
45/45 - 1s - loss: 0.0231 - val_loss: 0.0089 - lr: 1.0000e-05 - 711ms/epoch - 16ms/step
Epoch 242/500

Epoch 00242: val_loss improved from 0.00874 to 0.00852, saving model to LSTM1.h5
45/45 - 1s - loss: 0.0252 - val_loss: 0.0085 - lr: 1.0000e-05 - 720ms/epoch - 16ms/step
Epoch 243/500

Epoch 00243: val_loss did not improve from 0.00852
45/45 - 1s - loss: 0.0241 - val_loss: 0.0086 - lr: 1.0000e-05 - 719ms/epoch - 16ms/step
Epoch 244/500

Epoch 00244: val_loss improved from 0.00852 to 0.00849, saving model to LSTM1.h5
45/45 - 1s - loss: 0.0232 - val_loss: 0.0085 - lr: 1.0000e-05 - 777ms/epoch - 17ms/step
Epoch 245/500

Epoch 00245: val_loss did not improve from 0.00849
45/45 - 1s - loss: 0.0238 - val_loss: 0.0093 - lr: 1.0000e-05 - 754ms/epoch - 17ms/step
Epoch 246/500

Epoch 00246: val_loss did not improve from 0.00849
45/45 - 1s - loss: 0.0225 - val_loss: 0.0095 - lr: 1.0000e-05 - 707ms/epoch - 16ms/step
Epoch 247/500

Epoch 00247: val_loss did not improve from 0.00849
45/45 - 1s - loss: 0.0214 - val_loss: 0.0097 - lr: 1.0000e-05 - 742ms/epoch - 16ms/step
Epoch 248/500

Epoch 00248: val_loss did not improve from 0.00849
45/45 - 1s - loss: 0.0213 - val_loss: 0.0094 - lr: 1.0000e-05 - 752ms/epoch - 17ms/step
Epoch 249/500

Epoch 00249: val_loss did not improve from 0.00849
45/45 - 1s - loss: 0.0250 - val_loss: 0.0092 - lr: 1.0000e-05 - 715ms/epoch - 16ms/step
Epoch 250/500

Epoch 00250: val_loss did not improve from 0.00849
45/45 - 1s - loss: 0.0204 - val_loss: 0.0086 - lr: 1.0000e-05 - 739ms/epoch - 16ms/step
Epoch 251/500

Epoch 00251: val_loss did not improve from 0.00849
45/45 - 1s - loss: 0.0222 - val_loss: 0.0086 - lr: 1.0000e-05 - 719ms/epoch - 16ms/step
Epoch 252/500

Epoch 00252: val_loss did not improve from 0.00849
45/45 - 1s - loss: 0.0226 - val_loss: 0.0090 - lr: 1.0000e-05 - 741ms/epoch - 16ms/step
Epoch 253/500

Epoch 00253: val_loss did not improve from 0.00849
45/45 - 1s - loss: 0.0213 - val_loss: 0.0099 - lr: 1.0000e-05 - 735ms/epoch - 16ms/step
Epoch 254/500

Epoch 00254: val_loss did not improve from 0.00849
45/45 - 1s - loss: 0.0224 - val_loss: 0.0106 - lr: 1.0000e-05 - 721ms/epoch - 16ms/step
Epoch 255/500

Epoch 00255: val_loss did not improve from 0.00849
45/45 - 1s - loss: 0.0239 - val_loss: 0.0097 - lr: 1.0000e-05 - 709ms/epoch - 16ms/step
Epoch 256/500

Epoch 00256: val_loss did not improve from 0.00849
45/45 - 1s - loss: 0.0220 - val_loss: 0.0096 - lr: 1.0000e-05 - 723ms/epoch - 16ms/step
Epoch 257/500

Epoch 00257: val_loss did not improve from 0.00849
45/45 - 1s - loss: 0.0239 - val_loss: 0.0096 - lr: 1.0000e-05 - 725ms/epoch - 16ms/step
Epoch 258/500

Epoch 00258: val_loss did not improve from 0.00849
45/45 - 1s - loss: 0.0222 - val_loss: 0.0096 - lr: 1.0000e-05 - 784ms/epoch - 17ms/step
Epoch 259/500

Epoch 00259: val_loss did not improve from 0.00849
45/45 - 1s - loss: 0.0213 - val_loss: 0.0090 - lr: 1.0000e-05 - 763ms/epoch - 17ms/step
Epoch 260/500

Epoch 00260: val_loss improved from 0.00849 to 0.00825, saving model to LSTM1.h5
45/45 - 1s - loss: 0.0219 - val_loss: 0.0082 - lr: 1.0000e-05 - 891ms/epoch - 20ms/step
Epoch 261/500

Epoch 00261: val_loss improved from 0.00825 to 0.00814, saving model to LSTM1.h5
45/45 - 1s - loss: 0.0210 - val_loss: 0.0081 - lr: 1.0000e-05 - 716ms/epoch - 16ms/step
Epoch 262/500

Epoch 00262: val_loss did not improve from 0.00814
45/45 - 1s - loss: 0.0214 - val_loss: 0.0082 - lr: 1.0000e-05 - 745ms/epoch - 17ms/step
Epoch 263/500

Epoch 00263: val_loss did not improve from 0.00814
45/45 - 1s - loss: 0.0224 - val_loss: 0.0082 - lr: 1.0000e-05 - 694ms/epoch - 15ms/step
Epoch 264/500

Epoch 00264: val_loss did not improve from 0.00814
45/45 - 1s - loss: 0.0201 - val_loss: 0.0086 - lr: 1.0000e-05 - 741ms/epoch - 16ms/step
Epoch 265/500

Epoch 00265: val_loss did not improve from 0.00814
45/45 - 1s - loss: 0.0216 - val_loss: 0.0087 - lr: 1.0000e-05 - 710ms/epoch - 16ms/step
Epoch 266/500

Epoch 00266: val_loss did not improve from 0.00814
45/45 - 1s - loss: 0.0217 - val_loss: 0.0083 - lr: 1.0000e-05 - 747ms/epoch - 17ms/step
Epoch 267/500

Epoch 00267: val_loss did not improve from 0.00814
45/45 - 1s - loss: 0.0204 - val_loss: 0.0082 - lr: 1.0000e-05 - 774ms/epoch - 17ms/step
Epoch 268/500

Epoch 00268: val_loss did not improve from 0.00814
45/45 - 1s - loss: 0.0229 - val_loss: 0.0085 - lr: 1.0000e-05 - 692ms/epoch - 15ms/step
Epoch 269/500

Epoch 00269: val_loss did not improve from 0.00814
45/45 - 1s - loss: 0.0219 - val_loss: 0.0087 - lr: 1.0000e-05 - 788ms/epoch - 18ms/step
Epoch 270/500

Epoch 00270: val_loss did not improve from 0.00814
45/45 - 1s - loss: 0.0200 - val_loss: 0.0087 - lr: 1.0000e-05 - 722ms/epoch - 16ms/step
Epoch 271/500

Epoch 00271: val_loss did not improve from 0.00814
45/45 - 1s - loss: 0.0212 - val_loss: 0.0083 - lr: 1.0000e-05 - 737ms/epoch - 16ms/step
Epoch 272/500

Epoch 00272: val_loss did not improve from 0.00814
45/45 - 1s - loss: 0.0233 - val_loss: 0.0082 - lr: 1.0000e-05 - 723ms/epoch - 16ms/step
Epoch 273/500

Epoch 00273: val_loss did not improve from 0.00814
45/45 - 1s - loss: 0.0203 - val_loss: 0.0082 - lr: 1.0000e-05 - 733ms/epoch - 16ms/step
Epoch 274/500

Epoch 00274: val_loss did not improve from 0.00814
45/45 - 1s - loss: 0.0199 - val_loss: 0.0085 - lr: 1.0000e-05 - 725ms/epoch - 16ms/step
Epoch 275/500

Epoch 00275: val_loss did not improve from 0.00814
45/45 - 1s - loss: 0.0205 - val_loss: 0.0085 - lr: 1.0000e-05 - 753ms/epoch - 17ms/step
Epoch 276/500

Epoch 00276: val_loss did not improve from 0.00814
45/45 - 1s - loss: 0.0227 - val_loss: 0.0086 - lr: 1.0000e-05 - 690ms/epoch - 15ms/step
Epoch 277/500

Epoch 00277: val_loss did not improve from 0.00814
45/45 - 1s - loss: 0.0213 - val_loss: 0.0091 - lr: 1.0000e-05 - 725ms/epoch - 16ms/step
Epoch 278/500

Epoch 00278: val_loss did not improve from 0.00814
45/45 - 1s - loss: 0.0204 - val_loss: 0.0089 - lr: 1.0000e-05 - 748ms/epoch - 17ms/step
Epoch 279/500

Epoch 00279: val_loss did not improve from 0.00814
45/45 - 1s - loss: 0.0201 - val_loss: 0.0090 - lr: 1.0000e-05 - 726ms/epoch - 16ms/step
Epoch 280/500

Epoch 00280: val_loss did not improve from 0.00814
45/45 - 1s - loss: 0.0219 - val_loss: 0.0089 - lr: 1.0000e-05 - 792ms/epoch - 18ms/step
Epoch 281/500

Epoch 00281: val_loss did not improve from 0.00814
45/45 - 1s - loss: 0.0214 - val_loss: 0.0087 - lr: 1.0000e-05 - 754ms/epoch - 17ms/step
Epoch 282/500

Epoch 00282: val_loss did not improve from 0.00814
45/45 - 1s - loss: 0.0210 - val_loss: 0.0085 - lr: 1.0000e-05 - 710ms/epoch - 16ms/step
Epoch 283/500

Epoch 00283: val_loss did not improve from 0.00814
45/45 - 1s - loss: 0.0232 - val_loss: 0.0087 - lr: 1.0000e-05 - 771ms/epoch - 17ms/step
Epoch 284/500

Epoch 00284: val_loss did not improve from 0.00814
45/45 - 1s - loss: 0.0227 - val_loss: 0.0085 - lr: 1.0000e-05 - 743ms/epoch - 17ms/step
Epoch 285/500

Epoch 00285: val_loss did not improve from 0.00814
45/45 - 1s - loss: 0.0216 - val_loss: 0.0085 - lr: 1.0000e-05 - 727ms/epoch - 16ms/step
Epoch 286/500

Epoch 00286: val_loss did not improve from 0.00814
45/45 - 1s - loss: 0.0199 - val_loss: 0.0084 - lr: 1.0000e-05 - 712ms/epoch - 16ms/step
Epoch 287/500

Epoch 00287: val_loss did not improve from 0.00814
45/45 - 1s - loss: 0.0204 - val_loss: 0.0086 - lr: 1.0000e-05 - 754ms/epoch - 17ms/step
Epoch 288/500

Epoch 00288: val_loss did not improve from 0.00814
45/45 - 1s - loss: 0.0221 - val_loss: 0.0085 - lr: 1.0000e-05 - 697ms/epoch - 15ms/step
Epoch 289/500

Epoch 00289: val_loss did not improve from 0.00814
45/45 - 1s - loss: 0.0219 - val_loss: 0.0086 - lr: 1.0000e-05 - 749ms/epoch - 17ms/step
Epoch 290/500

Epoch 00290: val_loss did not improve from 0.00814
45/45 - 1s - loss: 0.0221 - val_loss: 0.0085 - lr: 1.0000e-05 - 735ms/epoch - 16ms/step
Epoch 291/500

Epoch 00291: val_loss did not improve from 0.00814
45/45 - 1s - loss: 0.0204 - val_loss: 0.0086 - lr: 1.0000e-05 - 694ms/epoch - 15ms/step
Epoch 292/500

Epoch 00292: val_loss did not improve from 0.00814
45/45 - 1s - loss: 0.0229 - val_loss: 0.0085 - lr: 1.0000e-05 - 732ms/epoch - 16ms/step
Epoch 293/500

Epoch 00293: val_loss did not improve from 0.00814
45/45 - 1s - loss: 0.0219 - val_loss: 0.0084 - lr: 1.0000e-05 - 734ms/epoch - 16ms/step
Epoch 294/500

Epoch 00294: val_loss did not improve from 0.00814
45/45 - 1s - loss: 0.0212 - val_loss: 0.0084 - lr: 1.0000e-05 - 725ms/epoch - 16ms/step
Epoch 295/500

Epoch 00295: val_loss did not improve from 0.00814
45/45 - 1s - loss: 0.0215 - val_loss: 0.0088 - lr: 1.0000e-05 - 700ms/epoch - 16ms/step
Epoch 296/500

Epoch 00296: val_loss did not improve from 0.00814
45/45 - 1s - loss: 0.0249 - val_loss: 0.0088 - lr: 1.0000e-05 - 717ms/epoch - 16ms/step
Epoch 297/500

Epoch 00297: val_loss did not improve from 0.00814
45/45 - 1s - loss: 0.0209 - val_loss: 0.0086 - lr: 1.0000e-05 - 694ms/epoch - 15ms/step
Epoch 298/500

Epoch 00298: val_loss did not improve from 0.00814
45/45 - 1s - loss: 0.0211 - val_loss: 0.0085 - lr: 1.0000e-05 - 677ms/epoch - 15ms/step
Epoch 299/500

Epoch 00299: val_loss did not improve from 0.00814
45/45 - 1s - loss: 0.0239 - val_loss: 0.0085 - lr: 1.0000e-05 - 701ms/epoch - 16ms/step
Epoch 300/500

Epoch 00300: val_loss did not improve from 0.00814
45/45 - 1s - loss: 0.0211 - val_loss: 0.0085 - lr: 1.0000e-05 - 707ms/epoch - 16ms/step
Epoch 301/500

Epoch 00301: val_loss did not improve from 0.00814
45/45 - 1s - loss: 0.0203 - val_loss: 0.0086 - lr: 1.0000e-05 - 740ms/epoch - 16ms/step
Epoch 302/500

Epoch 00302: val_loss did not improve from 0.00814
45/45 - 1s - loss: 0.0210 - val_loss: 0.0086 - lr: 1.0000e-05 - 736ms/epoch - 16ms/step
Epoch 303/500

Epoch 00303: val_loss did not improve from 0.00814
45/45 - 1s - loss: 0.0218 - val_loss: 0.0087 - lr: 1.0000e-05 - 728ms/epoch - 16ms/step
Epoch 304/500

Epoch 00304: val_loss did not improve from 0.00814
45/45 - 1s - loss: 0.0213 - val_loss: 0.0089 - lr: 1.0000e-05 - 743ms/epoch - 17ms/step
Epoch 305/500

Epoch 00305: val_loss did not improve from 0.00814
45/45 - 1s - loss: 0.0185 - val_loss: 0.0089 - lr: 1.0000e-05 - 711ms/epoch - 16ms/step
Epoch 306/500

Epoch 00306: val_loss did not improve from 0.00814
45/45 - 1s - loss: 0.0213 - val_loss: 0.0089 - lr: 1.0000e-05 - 763ms/epoch - 17ms/step
Epoch 307/500

Epoch 00307: val_loss did not improve from 0.00814
45/45 - 1s - loss: 0.0195 - val_loss: 0.0090 - lr: 1.0000e-05 - 709ms/epoch - 16ms/step
Epoch 308/500

Epoch 00308: val_loss did not improve from 0.00814
45/45 - 1s - loss: 0.0231 - val_loss: 0.0090 - lr: 1.0000e-05 - 721ms/epoch - 16ms/step
Epoch 309/500

Epoch 00309: val_loss did not improve from 0.00814
45/45 - 1s - loss: 0.0210 - val_loss: 0.0089 - lr: 1.0000e-05 - 769ms/epoch - 17ms/step
Epoch 310/500

Epoch 00310: val_loss did not improve from 0.00814
45/45 - 1s - loss: 0.0216 - val_loss: 0.0090 - lr: 1.0000e-05 - 749ms/epoch - 17ms/step
Epoch 311/500

Epoch 00311: val_loss did not improve from 0.00814
45/45 - 1s - loss: 0.0218 - val_loss: 0.0090 - lr: 1.0000e-05 - 701ms/epoch - 16ms/step
Epoch 00311: early stopping
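The learning-rate schedule visible in the log above (ReduceLROnPlateau cutting the rate by 10x whenever `val_loss` stalls for several epochs, EarlyStopping halting the run, and a `min_lr` floor of 1e-05) can be mimicked with plain-Python bookkeeping. The patience values below are illustrative assumptions inferred from the spacing of the reductions in the log, not the notebook's actual callback settings.

```python
def simulate_schedule(val_losses, lr=1e-3, factor=0.1, lr_patience=5,
                      stop_patience=50, min_lr=1e-5):
    """Simplified ReduceLROnPlateau + EarlyStopping bookkeeping.

    Returns (epochs_run, final_lr, best_val_loss). Patience values are
    assumptions for illustration, not the notebook's real settings.
    """
    best = float("inf")
    since_best = 0
    for epoch, vl in enumerate(val_losses, start=1):
        if vl < best:
            best, since_best = vl, 0
        else:
            since_best += 1
        # cut the LR after every `lr_patience` epochs without improvement,
        # never going below the min_lr floor
        if since_best > 0 and since_best % lr_patience == 0:
            lr = max(lr * factor, min_lr)
        if since_best >= stop_patience:
            return epoch, lr, best  # early stopping
    return len(val_losses), lr, best
```

With a single improvement at epoch 2 followed by a long plateau, the sketch reproduces the qualitative behaviour above: staged reductions down to the 1e-05 floor, then an early stop once patience runs out.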
SMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 216.26215770312788 
RMSE:	 14.705854538350632 
MAPE:	 11.92367463073216

EMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 122.96033735804009 
RMSE:	 11.08874823224155 
MAPE:	 9.251696034357076

WMA
Prediction vs Close:		53.73% Accuracy
Prediction vs Prediction:	46.64% Accuracy
MSE:	 56.80196950717294 
RMSE:	 7.5367081346681415 
MAPE:	 5.956122993340066

DEMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	42.16% Accuracy
MSE:	 120.51656357671082 
RMSE:	 10.978003624371365 
MAPE:	 9.343426819843298

KAMA
Prediction vs Close:		56.34% Accuracy
Prediction vs Prediction:	49.63% Accuracy
MSE:	 80.73205723036311 
RMSE:	 8.985101959931402 
MAPE:	 7.079003376879244
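The per-indicator summaries report MSE, RMSE and MAPE on the price scale, plus two directional hit rates. The error metrics can be sketched as below; the exact definitions behind "Prediction vs Close" and "Prediction vs Prediction" are not shown in this output, so the directional-accuracy variant here is an assumption.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MSE, RMSE and MAPE (in percent), as in the summaries above."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mse = np.mean((y_true - y_pred) ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs((y_true - y_pred) / y_true)) * 100
    return mse, rmse, mape

def directional_accuracy(y_true, y_pred):
    """Percent of steps where the predicted move has the same sign as
    the actual move. Assumed variant: the notebook's exact
    'Prediction vs Close' definition is not visible in this output."""
    true_up = np.diff(np.asarray(y_true, dtype=float)) > 0
    pred_up = np.diff(np.asarray(y_pred, dtype=float)) > 0
    return np.mean(true_up == pred_up) * 100
```

Note that MAPE divides by `y_true`, so it is undefined when the series touches zero; that is not an issue for the price-level data used here.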
MIDPOINT
MIDPOINT([input_arrays], [timeperiod=14])

MidPoint over period (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 14
Outputs:
    real
14
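Per the docstring, MIDPOINT is simply the midpoint of the rolling high/low range of the input. A NumPy equivalent is sketched below; the TA-Lib C implementation remains the reference.

```python
import numpy as np

def midpoint(price, timeperiod=14):
    """MidPoint over period: (highest + lowest) / 2 within each rolling
    window, matching the MIDPOINT docstring above. NaN until a full
    window is available."""
    p = np.asarray(price, dtype=float)
    out = np.full(p.shape, np.nan)
    for i in range(timeperiod - 1, len(p)):
        window = p[i - timeperiod + 1 : i + 1]
        out[i] = (window.max() + window.min()) / 2.0
    return out
```

For example, `midpoint([1, 2, 3, 4, 5], timeperiod=3)` yields `[nan, nan, 2, 3, 4]`.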

Working on MIDPOINT predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.53 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4212.289, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3747.746, Time=0.06 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.34 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3523.401, Time=0.10 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3387.759, Time=0.13 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.62 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=1.17 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3389.758, Time=0.28 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 4.274 seconds
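The stepwise search keeps whichever candidate order yields the lowest AIC; failed fits are reported as `AIC=inf` and are never selected. The final selection step can be reproduced from the printed values:

```python
# AIC values printed by the stepwise search above; inf marks failed fits
aics = {
    (1, 3, 1): float("inf"),
    (0, 3, 0): 4212.289,
    (1, 3, 0): 3747.746,
    (0, 3, 1): float("inf"),
    (2, 3, 0): 3523.401,
    (3, 3, 0): 3387.759,
    (3, 3, 1): float("inf"),
    (2, 3, 1): float("inf"),
}

# auto_arima keeps the candidate with the lowest AIC
best_order = min(aics, key=aics.get)
```

The intercept variant of (3,3,0) scored slightly worse (3389.758), so the zero-intercept model wins, matching the "Best model" line above.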
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1689.879
Date:                Sun, 12 Dec 2021   AIC                           3387.759
Time:                        12:09:13   BIC                           3406.522
Sample:                             0   HQIC                          3394.964
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1878      0.003   -345.315      0.000      -1.195      -1.181
ar.L2         -0.8876      0.007   -121.809      0.000      -0.902      -0.873
ar.L3         -0.3957      0.007    -60.127      0.000      -0.409      -0.383
sigma2         3.8904      0.020    193.404      0.000       3.851       3.930
===================================================================================
Ljung-Box (L1) (Q):                  13.21   Jarque-Bera (JB):           1659080.01
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.08   Skew:                             3.28
Prob(H) (two-sided):                  0.00   Kurtosis:                       225.31
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 
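The information criteria in the SARIMAX table follow directly from the reported log likelihood. A quick sanity check, assuming k = 4 estimated parameters (three AR terms plus sigma2) and an effective sample size of 808 - 3 = 805 after third-order differencing (both assumptions, inferred from the table rather than stated in it):

```python
import math

loglik = -1689.879  # Log Likelihood from the table above
k = 4               # ar.L1, ar.L2, ar.L3, sigma2 (assumed parameter count)
n_eff = 808 - 3     # effective observations after d=3 differencing (assumed)

aic = 2 * k - 2 * loglik                 # close to the reported 3387.759
bic = k * math.log(n_eff) - 2 * loglik   # close to the reported 3406.522
```

Both values agree with the table to within rounding of the printed log likelihood.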

WARNING:tensorflow:Layer lstm_5 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.06587, saving model to LSTM1.h5
58/58 - 3s - loss: 0.1860 - val_loss: 0.0659 - lr: 0.0010 - 3s/epoch - 48ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.06587
58/58 - 1s - loss: 0.0798 - val_loss: 0.0807 - lr: 0.0010 - 910ms/epoch - 16ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.06587
58/58 - 1s - loss: 0.0675 - val_loss: 0.4197 - lr: 0.0010 - 876ms/epoch - 15ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.06587 to 0.01530, saving model to LSTM1.h5
58/58 - 1s - loss: 0.0426 - val_loss: 0.0153 - lr: 0.0010 - 939ms/epoch - 16ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.01530
58/58 - 1s - loss: 0.0399 - val_loss: 0.2015 - lr: 0.0010 - 959ms/epoch - 17ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.01530 to 0.00621, saving model to LSTM1.h5
58/58 - 1s - loss: 0.0366 - val_loss: 0.0062 - lr: 0.0010 - 933ms/epoch - 16ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.00621
58/58 - 1s - loss: 0.0393 - val_loss: 0.0567 - lr: 0.0010 - 885ms/epoch - 15ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.00621
58/58 - 1s - loss: 0.0373 - val_loss: 0.0068 - lr: 0.0010 - 911ms/epoch - 16ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.00621
58/58 - 1s - loss: 0.0338 - val_loss: 0.2309 - lr: 0.0010 - 870ms/epoch - 15ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.00621
58/58 - 1s - loss: 0.0301 - val_loss: 0.0215 - lr: 0.0010 - 926ms/epoch - 16ms/step
Epoch 11/500

Epoch 00011: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00011: val_loss did not improve from 0.00621
58/58 - 1s - loss: 0.0308 - val_loss: 0.0065 - lr: 0.0010 - 890ms/epoch - 15ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.00621
58/58 - 1s - loss: 0.0351 - val_loss: 0.0083 - lr: 1.0000e-04 - 866ms/epoch - 15ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.00621
58/58 - 1s - loss: 0.0251 - val_loss: 0.0091 - lr: 1.0000e-04 - 854ms/epoch - 15ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.00621
58/58 - 1s - loss: 0.0267 - val_loss: 0.0091 - lr: 1.0000e-04 - 899ms/epoch - 15ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.00621
58/58 - 1s - loss: 0.0248 - val_loss: 0.0081 - lr: 1.0000e-04 - 867ms/epoch - 15ms/step
Epoch 16/500

Epoch 00016: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00016: val_loss did not improve from 0.00621
58/58 - 1s - loss: 0.0263 - val_loss: 0.0078 - lr: 1.0000e-04 - 899ms/epoch - 15ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.00621
58/58 - 1s - loss: 0.0220 - val_loss: 0.0080 - lr: 1.0000e-05 - 923ms/epoch - 16ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.00621
58/58 - 1s - loss: 0.0214 - val_loss: 0.0079 - lr: 1.0000e-05 - 842ms/epoch - 15ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.00621
58/58 - 1s - loss: 0.0247 - val_loss: 0.0078 - lr: 1.0000e-05 - 850ms/epoch - 15ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.00621
58/58 - 1s - loss: 0.0233 - val_loss: 0.0077 - lr: 1.0000e-05 - 853ms/epoch - 15ms/step
Epoch 21/500

Epoch 00021: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00021: val_loss did not improve from 0.00621
58/58 - 1s - loss: 0.0254 - val_loss: 0.0079 - lr: 1.0000e-05 - 882ms/epoch - 15ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.00621
58/58 - 1s - loss: 0.0238 - val_loss: 0.0080 - lr: 1.0000e-05 - 882ms/epoch - 15ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.00621
58/58 - 1s - loss: 0.0243 - val_loss: 0.0079 - lr: 1.0000e-05 - 853ms/epoch - 15ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.00621
58/58 - 1s - loss: 0.0240 - val_loss: 0.0079 - lr: 1.0000e-05 - 892ms/epoch - 15ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.00621
58/58 - 1s - loss: 0.0228 - val_loss: 0.0077 - lr: 1.0000e-05 - 880ms/epoch - 15ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.00621
58/58 - 1s - loss: 0.0242 - val_loss: 0.0079 - lr: 1.0000e-05 - 844ms/epoch - 15ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.00621
58/58 - 1s - loss: 0.0238 - val_loss: 0.0081 - lr: 1.0000e-05 - 859ms/epoch - 15ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.00621
58/58 - 1s - loss: 0.0233 - val_loss: 0.0080 - lr: 1.0000e-05 - 883ms/epoch - 15ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.00621
58/58 - 1s - loss: 0.0213 - val_loss: 0.0080 - lr: 1.0000e-05 - 898ms/epoch - 15ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.00621
58/58 - 1s - loss: 0.0209 - val_loss: 0.0080 - lr: 1.0000e-05 - 887ms/epoch - 15ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.00621
58/58 - 1s - loss: 0.0219 - val_loss: 0.0081 - lr: 1.0000e-05 - 875ms/epoch - 15ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.00621
58/58 - 1s - loss: 0.0245 - val_loss: 0.0081 - lr: 1.0000e-05 - 877ms/epoch - 15ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.00621
58/58 - 1s - loss: 0.0219 - val_loss: 0.0080 - lr: 1.0000e-05 - 836ms/epoch - 14ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.00621
58/58 - 1s - loss: 0.0226 - val_loss: 0.0079 - lr: 1.0000e-05 - 869ms/epoch - 15ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.00621
58/58 - 1s - loss: 0.0225 - val_loss: 0.0079 - lr: 1.0000e-05 - 899ms/epoch - 15ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.00621
58/58 - 1s - loss: 0.0265 - val_loss: 0.0080 - lr: 1.0000e-05 - 887ms/epoch - 15ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.00621
58/58 - 1s - loss: 0.0210 - val_loss: 0.0080 - lr: 1.0000e-05 - 857ms/epoch - 15ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.00621
58/58 - 1s - loss: 0.0246 - val_loss: 0.0082 - lr: 1.0000e-05 - 863ms/epoch - 15ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.00621
58/58 - 1s - loss: 0.0216 - val_loss: 0.0083 - lr: 1.0000e-05 - 878ms/epoch - 15ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.00621
58/58 - 1s - loss: 0.0222 - val_loss: 0.0079 - lr: 1.0000e-05 - 844ms/epoch - 15ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.00621
58/58 - 1s - loss: 0.0237 - val_loss: 0.0078 - lr: 1.0000e-05 - 857ms/epoch - 15ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.00621
58/58 - 1s - loss: 0.0237 - val_loss: 0.0078 - lr: 1.0000e-05 - 868ms/epoch - 15ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.00621
58/58 - 1s - loss: 0.0247 - val_loss: 0.0076 - lr: 1.0000e-05 - 876ms/epoch - 15ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.00621
58/58 - 1s - loss: 0.0244 - val_loss: 0.0075 - lr: 1.0000e-05 - 897ms/epoch - 15ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.00621
58/58 - 1s - loss: 0.0243 - val_loss: 0.0077 - lr: 1.0000e-05 - 865ms/epoch - 15ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.00621
58/58 - 1s - loss: 0.0229 - val_loss: 0.0078 - lr: 1.0000e-05 - 916ms/epoch - 16ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.00621
58/58 - 1s - loss: 0.0207 - val_loss: 0.0076 - lr: 1.0000e-05 - 874ms/epoch - 15ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.00621
58/58 - 1s - loss: 0.0248 - val_loss: 0.0074 - lr: 1.0000e-05 - 878ms/epoch - 15ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.00621
58/58 - 1s - loss: 0.0226 - val_loss: 0.0073 - lr: 1.0000e-05 - 881ms/epoch - 15ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.00621
58/58 - 1s - loss: 0.0219 - val_loss: 0.0075 - lr: 1.0000e-05 - 897ms/epoch - 15ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.00621
58/58 - 1s - loss: 0.0221 - val_loss: 0.0071 - lr: 1.0000e-05 - 852ms/epoch - 15ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.00621
58/58 - 1s - loss: 0.0223 - val_loss: 0.0074 - lr: 1.0000e-05 - 866ms/epoch - 15ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.00621
58/58 - 1s - loss: 0.0221 - val_loss: 0.0077 - lr: 1.0000e-05 - 841ms/epoch - 15ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.00621
58/58 - 1s - loss: 0.0241 - val_loss: 0.0075 - lr: 1.0000e-05 - 847ms/epoch - 15ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.00621
58/58 - 1s - loss: 0.0235 - val_loss: 0.0075 - lr: 1.0000e-05 - 851ms/epoch - 15ms/step
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.00621
58/58 - 1s - loss: 0.0200 - val_loss: 0.0075 - lr: 1.0000e-05 - 870ms/epoch - 15ms/step
Epoch 00056: early stopping

MIDPOINT
Prediction vs Close:		53.73% Accuracy
Prediction vs Prediction:	45.9% Accuracy
MSE:	 75.80130571921515 
RMSE:	 8.70639453041356 
MAPE:	 7.130945881105426

T3
T3([input_arrays], [timeperiod=5], [vfactor=0.7])

Triple Exponential Moving Average (T3) (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 5
    vfactor: 0.7
Outputs:
    real
19
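TA-Lib's T3 follows Tim Tillson's construction: six chained EMAs blended with weights derived from `vfactor`. The sketch below assumes the standard Tillson coefficients (whose sum is 1, so a constant series maps to itself); TA-Lib's unstable-period and warm-up handling may differ.

```python
import numpy as np

def ema(x, period):
    """Recursive EMA with smoothing factor 2 / (period + 1)."""
    alpha = 2.0 / (period + 1.0)
    out = np.empty(len(x), dtype=float)
    out[0] = x[0]
    for i in range(1, len(x)):
        out[i] = alpha * x[i] + (1.0 - alpha) * out[i - 1]
    return out

def t3(price, timeperiod=5, vfactor=0.7):
    """Tillson T3: six chained EMAs combined with vfactor-derived
    weights. Standard construction; TA-Lib internals may differ."""
    a = vfactor
    e1 = ema(np.asarray(price, dtype=float), timeperiod)
    e2 = ema(e1, timeperiod)
    e3 = ema(e2, timeperiod)
    e4 = ema(e3, timeperiod)
    e5 = ema(e4, timeperiod)
    e6 = ema(e5, timeperiod)
    c1 = -a ** 3
    c2 = 3 * a ** 2 + 3 * a ** 3
    c3 = -6 * a ** 2 - 3 * a - 3 * a ** 3
    c4 = 1 + 3 * a + a ** 3 + 3 * a ** 2
    return c1 * e6 + c2 * e5 + c3 * e4 + c4 * e3
```

The repeated smoothing is why T3 produces the least volatile input series of the indicators tried here, consistent with its low MSE further below.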

Working on T3 predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.55 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4414.515, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3944.062, Time=0.06 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.52 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3715.173, Time=0.08 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3577.471, Time=0.11 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.88 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.80 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3579.471, Time=0.25 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 4.307 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1784.736
Date:                Sun, 12 Dec 2021   AIC                           3577.471
Time:                        12:11:36   BIC                           3596.235
Sample:                             0   HQIC                          3584.677
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1982      0.003   -389.844      0.000      -1.204      -1.192
ar.L2         -0.8974      0.006   -139.861      0.000      -0.910      -0.885
ar.L3         -0.3983      0.006    -68.862      0.000      -0.410      -0.387
sigma2         4.9242      0.023    215.469      0.000       4.879       4.969
===================================================================================
Ljung-Box (L1) (Q):                  14.55   Jarque-Bera (JB):           2468024.38
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       274.15
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

WARNING:tensorflow:Layer lstm_6 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.05236, saving model to LSTM1.h5
43/43 - 3s - loss: 0.1928 - val_loss: 0.0524 - lr: 0.0010 - 3s/epoch - 61ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.05236
43/43 - 1s - loss: 0.1446 - val_loss: 0.0525 - lr: 0.0010 - 704ms/epoch - 16ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.05236
43/43 - 1s - loss: 0.1028 - val_loss: 0.9645 - lr: 0.0010 - 634ms/epoch - 15ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.05236
43/43 - 1s - loss: 0.0612 - val_loss: 0.2236 - lr: 0.0010 - 657ms/epoch - 15ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.05236
43/43 - 1s - loss: 0.0534 - val_loss: 0.1234 - lr: 0.0010 - 688ms/epoch - 16ms/step
Epoch 6/500

Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00006: val_loss did not improve from 0.05236
43/43 - 1s - loss: 0.0439 - val_loss: 0.0788 - lr: 0.0010 - 668ms/epoch - 16ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.05236
43/43 - 1s - loss: 0.0362 - val_loss: 0.0777 - lr: 1.0000e-04 - 694ms/epoch - 16ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.05236
43/43 - 1s - loss: 0.0359 - val_loss: 0.0834 - lr: 1.0000e-04 - 700ms/epoch - 16ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.05236
43/43 - 1s - loss: 0.0339 - val_loss: 0.0859 - lr: 1.0000e-04 - 650ms/epoch - 15ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.05236
43/43 - 1s - loss: 0.0323 - val_loss: 0.0797 - lr: 1.0000e-04 - 712ms/epoch - 17ms/step
Epoch 11/500

Epoch 00011: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00011: val_loss did not improve from 0.05236
43/43 - 1s - loss: 0.0311 - val_loss: 0.0836 - lr: 1.0000e-04 - 720ms/epoch - 17ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.05236
43/43 - 1s - loss: 0.0294 - val_loss: 0.0836 - lr: 1.0000e-05 - 706ms/epoch - 16ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.05236
43/43 - 1s - loss: 0.0328 - val_loss: 0.0842 - lr: 1.0000e-05 - 677ms/epoch - 16ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.05236
43/43 - 1s - loss: 0.0290 - val_loss: 0.0851 - lr: 1.0000e-05 - 701ms/epoch - 16ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.05236
43/43 - 1s - loss: 0.0294 - val_loss: 0.0855 - lr: 1.0000e-05 - 636ms/epoch - 15ms/step
Epoch 16/500

Epoch 00016: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00016: val_loss did not improve from 0.05236
43/43 - 1s - loss: 0.0311 - val_loss: 0.0847 - lr: 1.0000e-05 - 659ms/epoch - 15ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.05236
43/43 - 1s - loss: 0.0303 - val_loss: 0.0844 - lr: 1.0000e-05 - 705ms/epoch - 16ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.05236
43/43 - 1s - loss: 0.0310 - val_loss: 0.0839 - lr: 1.0000e-05 - 711ms/epoch - 17ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.05236
43/43 - 1s - loss: 0.0346 - val_loss: 0.0832 - lr: 1.0000e-05 - 700ms/epoch - 16ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.05236
43/43 - 1s - loss: 0.0296 - val_loss: 0.0837 - lr: 1.0000e-05 - 720ms/epoch - 17ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.05236
43/43 - 1s - loss: 0.0314 - val_loss: 0.0832 - lr: 1.0000e-05 - 676ms/epoch - 16ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.05236
43/43 - 1s - loss: 0.0309 - val_loss: 0.0828 - lr: 1.0000e-05 - 657ms/epoch - 15ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.05236
43/43 - 1s - loss: 0.0310 - val_loss: 0.0832 - lr: 1.0000e-05 - 684ms/epoch - 16ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.05236
43/43 - 1s - loss: 0.0297 - val_loss: 0.0835 - lr: 1.0000e-05 - 685ms/epoch - 16ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.05236
43/43 - 1s - loss: 0.0330 - val_loss: 0.0843 - lr: 1.0000e-05 - 600ms/epoch - 14ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.05236
43/43 - 1s - loss: 0.0317 - val_loss: 0.0855 - lr: 1.0000e-05 - 654ms/epoch - 15ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.05236
43/43 - 1s - loss: 0.0303 - val_loss: 0.0864 - lr: 1.0000e-05 - 634ms/epoch - 15ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.05236
43/43 - 1s - loss: 0.0290 - val_loss: 0.0858 - lr: 1.0000e-05 - 618ms/epoch - 14ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.05236
43/43 - 1s - loss: 0.0299 - val_loss: 0.0857 - lr: 1.0000e-05 - 661ms/epoch - 15ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.05236
43/43 - 1s - loss: 0.0284 - val_loss: 0.0850 - lr: 1.0000e-05 - 644ms/epoch - 15ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.05236
43/43 - 1s - loss: 0.0326 - val_loss: 0.0849 - lr: 1.0000e-05 - 662ms/epoch - 15ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.05236
43/43 - 1s - loss: 0.0331 - val_loss: 0.0842 - lr: 1.0000e-05 - 631ms/epoch - 15ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.05236
43/43 - 1s - loss: 0.0312 - val_loss: 0.0845 - lr: 1.0000e-05 - 680ms/epoch - 16ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.05236
43/43 - 1s - loss: 0.0287 - val_loss: 0.0853 - lr: 1.0000e-05 - 671ms/epoch - 16ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.05236
43/43 - 1s - loss: 0.0303 - val_loss: 0.0844 - lr: 1.0000e-05 - 671ms/epoch - 16ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.05236
43/43 - 1s - loss: 0.0276 - val_loss: 0.0851 - lr: 1.0000e-05 - 698ms/epoch - 16ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.05236
43/43 - 1s - loss: 0.0283 - val_loss: 0.0854 - lr: 1.0000e-05 - 661ms/epoch - 15ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.05236
43/43 - 1s - loss: 0.0301 - val_loss: 0.0857 - lr: 1.0000e-05 - 637ms/epoch - 15ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.05236
43/43 - 1s - loss: 0.0284 - val_loss: 0.0851 - lr: 1.0000e-05 - 658ms/epoch - 15ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.05236
43/43 - 1s - loss: 0.0290 - val_loss: 0.0847 - lr: 1.0000e-05 - 651ms/epoch - 15ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.05236
43/43 - 1s - loss: 0.0301 - val_loss: 0.0841 - lr: 1.0000e-05 - 692ms/epoch - 16ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.05236
43/43 - 1s - loss: 0.0274 - val_loss: 0.0851 - lr: 1.0000e-05 - 670ms/epoch - 16ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.05236
43/43 - 1s - loss: 0.0300 - val_loss: 0.0844 - lr: 1.0000e-05 - 631ms/epoch - 15ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.05236
43/43 - 1s - loss: 0.0284 - val_loss: 0.0844 - lr: 1.0000e-05 - 691ms/epoch - 16ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.05236
43/43 - 1s - loss: 0.0291 - val_loss: 0.0832 - lr: 1.0000e-05 - 665ms/epoch - 15ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.05236
43/43 - 1s - loss: 0.0273 - val_loss: 0.0833 - lr: 1.0000e-05 - 687ms/epoch - 16ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.05236
43/43 - 1s - loss: 0.0290 - val_loss: 0.0836 - lr: 1.0000e-05 - 646ms/epoch - 15ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.05236
43/43 - 1s - loss: 0.0290 - val_loss: 0.0844 - lr: 1.0000e-05 - 684ms/epoch - 16ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.05236
43/43 - 1s - loss: 0.0317 - val_loss: 0.0849 - lr: 1.0000e-05 - 689ms/epoch - 16ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.05236
43/43 - 1s - loss: 0.0267 - val_loss: 0.0838 - lr: 1.0000e-05 - 655ms/epoch - 15ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.05236
43/43 - 1s - loss: 0.0291 - val_loss: 0.0829 - lr: 1.0000e-05 - 639ms/epoch - 15ms/step
Epoch 00051: early stopping

T3
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	46.64% Accuracy
MSE:	 46.36148283886003 
RMSE:	 6.808926702415001 
MAPE:	 5.596706034896943

TEMA
TEMA([input_arrays], [timeperiod=30])

Triple Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
9
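TEMA combines three chained EMAs as 3*EMA1 - 3*EMA2 + EMA3, which cancels much of the lag of a single EMA while still smoothing. A minimal NumPy sketch (TA-Lib's unstable-period handling may differ):

```python
import numpy as np

def ema(x, period):
    """Recursive EMA with smoothing factor 2 / (period + 1)."""
    alpha = 2.0 / (period + 1.0)
    out = np.empty(len(x), dtype=float)
    out[0] = x[0]
    for i in range(1, len(x)):
        out[i] = alpha * x[i] + (1.0 - alpha) * out[i - 1]
    return out

def tema(price, timeperiod=30):
    """TEMA = 3*EMA1 - 3*EMA2 + EMA3, where each EMA is applied to
    the output of the previous one."""
    e1 = ema(np.asarray(price, dtype=float), timeperiod)
    e2 = ema(e1, timeperiod)
    e3 = ema(e2, timeperiod)
    return 3 * e1 - 3 * e2 + e3
```

The weights 3, -3, 1 sum to 1, so a constant series again maps to itself.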

Working on TEMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.64 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4352.703, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3889.412, Time=0.06 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.38 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3689.930, Time=0.08 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3574.245, Time=0.12 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.56 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=1.04 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3576.245, Time=0.27 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 4.208 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1783.123
Date:                Sun, 12 Dec 2021   AIC                           3574.245
Time:                        12:13:37   BIC                           3593.008
Sample:                             0   HQIC                          3581.451
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1480      0.004   -302.430      0.000      -1.155      -1.141
ar.L2         -0.8300      0.008    -99.682      0.000      -0.846      -0.814
ar.L3         -0.3687      0.007    -50.527      0.000      -0.383      -0.354
sigma2         4.9055      0.028    175.970      0.000       4.851       4.960
===================================================================================
Ljung-Box (L1) (Q):                  11.61   Jarque-Bera (JB):           1261976.58
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.16   Skew:                             2.52
Prob(H) (two-sided):                  0.00   Kurtosis:                       196.90
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

WARNING:tensorflow:Layer lstm_7 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.01596, saving model to LSTM1.h5
90/90 - 3s - loss: 0.2644 - val_loss: 0.0160 - lr: 0.0010 - 3s/epoch - 36ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.01596
90/90 - 1s - loss: 0.0703 - val_loss: 0.0610 - lr: 0.0010 - 1s/epoch - 15ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.01596
90/90 - 1s - loss: 0.1296 - val_loss: 0.0363 - lr: 0.0010 - 1s/epoch - 15ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.01596
90/90 - 1s - loss: 0.0808 - val_loss: 1.3399 - lr: 0.0010 - 1s/epoch - 15ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.01596
90/90 - 1s - loss: 0.0449 - val_loss: 0.2443 - lr: 0.0010 - 1s/epoch - 15ms/step
Epoch 6/500

Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00006: val_loss did not improve from 0.01596
90/90 - 1s - loss: 0.0420 - val_loss: 0.0862 - lr: 0.0010 - 1s/epoch - 15ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.01596
90/90 - 1s - loss: 0.0426 - val_loss: 0.1005 - lr: 1.0000e-04 - 1s/epoch - 15ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.01596
90/90 - 1s - loss: 0.0357 - val_loss: 0.0937 - lr: 1.0000e-04 - 1s/epoch - 14ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.01596
90/90 - 1s - loss: 0.0392 - val_loss: 0.0879 - lr: 1.0000e-04 - 1s/epoch - 15ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.01596
90/90 - 1s - loss: 0.0381 - val_loss: 0.0818 - lr: 1.0000e-04 - 1s/epoch - 15ms/step
Epoch 11/500

Epoch 00011: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00011: val_loss did not improve from 0.01596
90/90 - 1s - loss: 0.0355 - val_loss: 0.0753 - lr: 1.0000e-04 - 1s/epoch - 14ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.01596
90/90 - 1s - loss: 0.0306 - val_loss: 0.0757 - lr: 1.0000e-05 - 1s/epoch - 14ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.01596
90/90 - 1s - loss: 0.0336 - val_loss: 0.0761 - lr: 1.0000e-05 - 1s/epoch - 15ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.01596
90/90 - 1s - loss: 0.0312 - val_loss: 0.0764 - lr: 1.0000e-05 - 1s/epoch - 15ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.01596
90/90 - 1s - loss: 0.0353 - val_loss: 0.0757 - lr: 1.0000e-05 - 1s/epoch - 16ms/step
Epoch 16/500

Epoch 00016: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00016: val_loss did not improve from 0.01596
90/90 - 1s - loss: 0.0306 - val_loss: 0.0754 - lr: 1.0000e-05 - 1s/epoch - 15ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.01596
90/90 - 1s - loss: 0.0323 - val_loss: 0.0756 - lr: 1.0000e-05 - 1s/epoch - 14ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.01596
90/90 - 1s - loss: 0.0324 - val_loss: 0.0766 - lr: 1.0000e-05 - 1s/epoch - 15ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.01596
90/90 - 1s - loss: 0.0339 - val_loss: 0.0777 - lr: 1.0000e-05 - 1s/epoch - 16ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.01596
90/90 - 1s - loss: 0.0324 - val_loss: 0.0783 - lr: 1.0000e-05 - 1s/epoch - 15ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.01596
90/90 - 1s - loss: 0.0295 - val_loss: 0.0782 - lr: 1.0000e-05 - 1s/epoch - 15ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.01596
90/90 - 1s - loss: 0.0327 - val_loss: 0.0788 - lr: 1.0000e-05 - 1s/epoch - 15ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.01596
90/90 - 1s - loss: 0.0320 - val_loss: 0.0787 - lr: 1.0000e-05 - 1s/epoch - 15ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.01596
90/90 - 1s - loss: 0.0281 - val_loss: 0.0780 - lr: 1.0000e-05 - 1s/epoch - 15ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.01596
90/90 - 1s - loss: 0.0332 - val_loss: 0.0776 - lr: 1.0000e-05 - 1s/epoch - 15ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.01596
90/90 - 1s - loss: 0.0322 - val_loss: 0.0770 - lr: 1.0000e-05 - 1s/epoch - 15ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.01596
90/90 - 1s - loss: 0.0315 - val_loss: 0.0759 - lr: 1.0000e-05 - 1s/epoch - 14ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.01596
90/90 - 1s - loss: 0.0286 - val_loss: 0.0763 - lr: 1.0000e-05 - 1s/epoch - 15ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.01596
90/90 - 1s - loss: 0.0295 - val_loss: 0.0767 - lr: 1.0000e-05 - 1s/epoch - 16ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.01596
90/90 - 1s - loss: 0.0312 - val_loss: 0.0764 - lr: 1.0000e-05 - 1s/epoch - 15ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.01596
90/90 - 1s - loss: 0.0332 - val_loss: 0.0753 - lr: 1.0000e-05 - 1s/epoch - 15ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.01596
90/90 - 1s - loss: 0.0289 - val_loss: 0.0756 - lr: 1.0000e-05 - 1s/epoch - 15ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.01596
90/90 - 1s - loss: 0.0312 - val_loss: 0.0762 - lr: 1.0000e-05 - 1s/epoch - 15ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.01596
90/90 - 1s - loss: 0.0313 - val_loss: 0.0745 - lr: 1.0000e-05 - 1s/epoch - 15ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.01596
90/90 - 1s - loss: 0.0345 - val_loss: 0.0749 - lr: 1.0000e-05 - 1s/epoch - 15ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.01596
90/90 - 1s - loss: 0.0318 - val_loss: 0.0762 - lr: 1.0000e-05 - 1s/epoch - 14ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.01596
90/90 - 1s - loss: 0.0302 - val_loss: 0.0756 - lr: 1.0000e-05 - 1s/epoch - 16ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.01596
90/90 - 1s - loss: 0.0289 - val_loss: 0.0750 - lr: 1.0000e-05 - 1s/epoch - 16ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.01596
90/90 - 1s - loss: 0.0309 - val_loss: 0.0755 - lr: 1.0000e-05 - 1s/epoch - 15ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.01596
90/90 - 1s - loss: 0.0309 - val_loss: 0.0754 - lr: 1.0000e-05 - 1s/epoch - 15ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.01596
90/90 - 1s - loss: 0.0302 - val_loss: 0.0753 - lr: 1.0000e-05 - 1s/epoch - 14ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.01596
90/90 - 1s - loss: 0.0326 - val_loss: 0.0736 - lr: 1.0000e-05 - 1s/epoch - 14ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.01596
90/90 - 1s - loss: 0.0318 - val_loss: 0.0728 - lr: 1.0000e-05 - 1s/epoch - 15ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.01596
90/90 - 1s - loss: 0.0282 - val_loss: 0.0731 - lr: 1.0000e-05 - 1s/epoch - 15ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.01596
90/90 - 1s - loss: 0.0314 - val_loss: 0.0749 - lr: 1.0000e-05 - 1s/epoch - 15ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.01596
90/90 - 1s - loss: 0.0289 - val_loss: 0.0755 - lr: 1.0000e-05 - 1s/epoch - 15ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.01596
90/90 - 1s - loss: 0.0304 - val_loss: 0.0760 - lr: 1.0000e-05 - 1s/epoch - 15ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.01596
90/90 - 1s - loss: 0.0280 - val_loss: 0.0754 - lr: 1.0000e-05 - 1s/epoch - 16ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.01596
90/90 - 1s - loss: 0.0281 - val_loss: 0.0756 - lr: 1.0000e-05 - 1s/epoch - 15ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.01596
90/90 - 1s - loss: 0.0303 - val_loss: 0.0775 - lr: 1.0000e-05 - 1s/epoch - 16ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.01596
90/90 - 1s - loss: 0.0293 - val_loss: 0.0771 - lr: 1.0000e-05 - 1s/epoch - 15ms/step
Epoch 00051: early stopping
SMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 216.26215770312788 
RMSE:	 14.705854538350632 
MAPE:	 11.92367463073216

EMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 122.96033735804009 
RMSE:	 11.08874823224155 
MAPE:	 9.251696034357076

WMA
Prediction vs Close:		53.73% Accuracy
Prediction vs Prediction:	46.64% Accuracy
MSE:	 56.80196950717294 
RMSE:	 7.5367081346681415 
MAPE:	 5.956122993340066

DEMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	42.16% Accuracy
MSE:	 120.51656357671082 
RMSE:	 10.978003624371365 
MAPE:	 9.343426819843298

KAMA
Prediction vs Close:		56.34% Accuracy
Prediction vs Prediction:	49.63% Accuracy
MSE:	 80.73205723036311 
RMSE:	 8.985101959931402 
MAPE:	 7.079003376879244

MIDPOINT
Prediction vs Close:		53.73% Accuracy
Prediction vs Prediction:	45.9% Accuracy
MSE:	 75.80130571921515 
RMSE:	 8.70639453041356 
MAPE:	 7.130945881105426

T3
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	46.64% Accuracy
MSE:	 46.36148283886003 
RMSE:	 6.808926702415001 
MAPE:	 5.596706034896943

TEMA
Prediction vs Close:		51.87% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 32.02006831549358 
RMSE:	 5.658627776722337 
MAPE:	 4.840005256997922
Runtime: mins: 20.992133364683333
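The per-MA summaries above report MSE, RMSE, and two directional-accuracy figures. A toy sketch of how those quantities are computed (the prices here are made up purely for illustration):

```python
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_percentage_error

actual = np.array([100.0, 102.0, 101.0, 105.0])  # hypothetical closes
pred = np.array([101.0, 101.5, 102.0, 104.0])    # hypothetical predictions

mse = mean_squared_error(actual, pred)
rmse = mse ** 0.5
mape = 100 * mean_absolute_percentage_error(actual, pred)  # sklearn returns a fraction

# directional accuracy in the "prediction vs prediction" style: the fraction
# of days where the predicted move and the actual move share a sign
same_direction = np.sign(np.diff(pred)) == np.sign(np.diff(actual))
directional_accuracy = same_direction.mean()
```

The simulation loops in this notebook compute the same idea by comparing each prediction against the previous close ("Prediction vs Close") or the previous prediction ("Prediction vs Prediction").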

Architecture used

In [ ]:
from google.colab import files
import cv2
uploaded = files.upload()

Saving Experiment1.png to Experiment1 (2).png
In [ ]:
img = cv2.cvtColor(cv2.imread('Experiment1.png'), cv2.COLOR_BGR2RGB)  # OpenCV loads BGR; convert for matplotlib
plt.figure(figsize=(20,10))
plt.axis("off")
plt.title('LSTM Architecture '+imgfile,fontsize=18)
plt.imshow(img)
Out[ ]:
<matplotlib.image.AxesImage at 0x7fdf309b3090>

Excess kurtosis compares the kurtosis of a distribution against that of a normal distribution, whose kurtosis equals 3. It is therefore computed with the formula below:

Excess Kurtosis = Kurtosis – 3
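As a quick check of the formula, scipy's `kurtosis` reports the Fisher (excess) definition by default, so a large normal sample should come out near 0:

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
sample = rng.normal(size=100_000)

# fisher=True (the default) subtracts 3, i.e. returns excess kurtosis
excess = kurtosis(sample, fisher=True)
# fisher=False returns plain kurtosis, ~3 for a normal sample
plain = kurtosis(sample, fisher=False)
```

For the heavy-tailed residuals in the SARIMAX summaries above (kurtosis around 197 and 272), the excess kurtosis is simply those values minus 3.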

Model Plots

In [ ]:
np.save("X_train_appl.npy", X_train)
np.save("y_train_appl.npy", y_train)
np.save("X_test_appl.npy", X_test)
np.save("y_test_appl.npy", y_test)
np.save("yc_train_appl.npy", yc_train)
np.save("yc_test_appl.npy", yc_test)
np.save('index_train_appl.npy', index_train)
np.save('index_test_appl.npy', index_test)
In [ ]:
list(simulation1.keys())
Out[ ]:
['SMA', 'EMA', 'WMA', 'DEMA', 'KAMA', 'MIDPOINT', 'T3', 'TEMA']
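Each of these moving averages supplies the "low-volatility" component in the hybrid: the smoothed series goes to ARIMA, the residual goes to the LSTM, and the two parts add back to the original series. A minimal pandas sketch of that split, using a rolling mean as a stand-in for the TA-Lib functions (the numbers are illustrative):

```python
import pandas as pd

close = pd.Series([10.0, 11.0, 12.5, 12.0, 13.5, 14.0, 13.0, 15.0])

low_vol = close.rolling(window=3).mean().fillna(0)  # smoothed component (ARIMA input)
high_vol = close.subtract(low_vol, fill_value=0)    # residual component (LSTM input)

# the decomposition is exact: low_vol + high_vol reconstructs the closes
reconstructed = low_vol + high_vol
```

The simulation loop below does the same per-column with `functions[ma]` and its tuned `timeperiod`, then sums the two models' forecasts to form the final prediction.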
In [71]:
with open('simulation1_data.json') as json_file:
    simulation1 = json.load(json_file)
fileimg = 'Experiment1'
In [72]:
for i in range(len(list(simulation1.keys()))):
  SIM = list(simulation1.keys())[i]
  plot_train(simulation1,SIM)
  plot_test(simulation1,SIM)
----- Train RMSE for SMA ----- 7.922367251770581
----- Train_MSE_LSTM for SMA ----- 62.763902871926945
----- Train MAE LSTM for SMA ----- 6.766703454283121
----- Test RMSE for SMA----- 14.705854538350632
----- Test_MSE_LSTM for SMA----- 216.26215770312788
----- Test_MAE_LSTM for SMA----- 11.92367463073216
----- Train RMSE for EMA ----- 9.326837823498344
----- Train_MSE_LSTM for EMA ----- 86.98990378583935
----- Train MAE LSTM for EMA ----- 8.227210225627324
----- Test RMSE for EMA----- 11.08874823224155
----- Test_MSE_LSTM for EMA----- 122.96033735804009
----- Test_MAE_LSTM for EMA----- 9.251696034357076
----- Train RMSE for WMA ----- 9.567750607562207
----- Train_MSE_LSTM for WMA ----- 91.54185168850697
----- Train MAE LSTM for WMA ----- 8.483814814644358
----- Test RMSE for WMA----- 7.5367081346681415
----- Test_MSE_LSTM for WMA----- 56.80196950717294
----- Test_MAE_LSTM for WMA----- 5.956122993340066
----- Train RMSE for DEMA ----- 11.260564301359528
----- Train_MSE_LSTM for DEMA ----- 126.80030838505259
----- Train MAE LSTM for DEMA ----- 10.029241457055923
----- Test RMSE for DEMA----- 10.978003624371365
----- Test_MSE_LSTM for DEMA----- 120.51656357671082
----- Test_MAE_LSTM for DEMA----- 9.343426819843298
----- Train RMSE for KAMA ----- 9.728924164194906
----- Train_MSE_LSTM for KAMA ----- 94.65196539265554
----- Train MAE LSTM for KAMA ----- 8.726636092269294
----- Test RMSE for KAMA----- 8.985101959931402
----- Test_MSE_LSTM for KAMA----- 80.73205723036311
----- Test_MAE_LSTM for KAMA----- 7.079003376879244
----- Train RMSE for MIDPOINT ----- 8.489283494179926
----- Train_MSE_LSTM for MIDPOINT ----- 72.06793424455573
----- Train MAE LSTM for MIDPOINT ----- 7.510015655912938
----- Test RMSE for MIDPOINT----- 8.70639453041356
----- Test_MSE_LSTM for MIDPOINT----- 75.80130571921515
----- Test_MAE_LSTM for MIDPOINT----- 7.130945881105426
----- Train RMSE for T3 ----- 10.915128509512805
----- Train_MSE_LSTM for T3 ----- 119.14003037917921
----- Train MAE LSTM for T3 ----- 9.760355664290358
----- Test RMSE for T3----- 6.808926702415001
----- Test_MSE_LSTM for T3----- 46.36148283886003
----- Test_MAE_LSTM for T3----- 5.596706034896943
----- Train RMSE for TEMA ----- 6.723556059985799
----- Train_MSE_LSTM for TEMA ----- 45.20620609177176
----- Train MAE LSTM for TEMA ----- 4.40413747354074
----- Test RMSE for TEMA----- 5.658627776722337
----- Test_MSE_LSTM for TEMA----- 32.02006831549358
----- Test_MAE_LSTM for TEMA----- 4.840005256997922

Univariate ARIMA Multistep Multivariate LSTM Hybrid Model Experiment 2

In [ ]:
def get_lstm(data,original_data, train_len, test_len,img_file,ma ,lstm_len=3):
    # prepare train and test data
    X_value = pd.DataFrame(data.iloc[:, :])
    y_value = pd.DataFrame(data.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scale_dataset = X_scaler.fit_transform(X_value)
    y_scale_dataset = y_scaler.fit_transform(y_value)
    # Get data and check shape
    X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)  # X has shape (samples, 3, 21): each 3 x 21 slice is 3 days of features; yc holds the corresponding closing prices
    # pdb.set_trace()
    X_train, X_test, = split_train_test(X)
    y_train, y_test, = split_train_test(y)
    # yc_train, yc_test, = split_train_test(original_data)
    index_train, index_test, = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
    det = 20
    input_dim = X_train.shape[1]#3
    feature_size = X_train.shape[2]#24
    output_dim = y_train.shape[1]#1



    # # Option 1
    # # Set up & fit LSTM RNN
    # model = Sequential()
    # model.add(LSTM(256, activation='relu', kernel_initializer='he_normal', input_shape=(input_dim, feature_size)))
    # model.add(Dense(units=64,activation='relu'))
    # model.add(Dropout(0.5))
    # model.add(Dense(units=output_dim))
    # model.compile(optimizer=Adam(learning_rate = 0.001), loss='mse')

    # ## Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()


    # option 2
    model = Sequential()
    model.add(Bidirectional(LSTM(units= 128), input_shape=(input_dim, feature_size)))
    model.add(Dense(64))
    model.add(Dense(units=output_dim))
    model.compile(optimizer=Adam(learning_rate = 0.001), loss='mean_squared_error', metrics=['accuracy'])
    # Common code
    callbacks = [
    EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    ModelCheckpoint('LSTM2.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    fname1 = img_file+'.png'
    tensorflow.keras.utils.plot_model(
        model, to_file=fname1, show_shapes=True, show_dtype=False,
        show_layer_names=True, expand_nested=False, dpi=96,
        layer_range=None, show_layer_activations=False
    )
    history = model.fit(X_train, y_train, epochs=500, batch_size=10, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # plot loss
    fname2 = img_file+'-'+ma
    plt.title(img_file+'-'+ma+' Loss')
    plt.xlabel("Epochs")
    plt.ylabel("Loss")
    pyplot.plot(history.history['loss'], label='train')
    pyplot.plot(history.history['val_loss'], label='validation')
    pyplot.legend()
    pyplot.savefig(fname2+'.png',dpi='figure')
    pyplot.show()




    # Option 3
    # define custom activation
    # 
    # class Double_Tanh(Activation):
    #     def __init__(self, activation, **kwargs):
    #         super(Double_Tanh, self).__init__(activation, **kwargs)
    #         self.__name__ = 'double_tanh'

    # def double_tanh(x):
    #     return (K.tanh(x) * 2)

    # get_custom_objects().update({'double_tanh':Double_Tanh(double_tanh)})
    #     # Model Generation
    # model = Sequential()
    # #check https://machinelearningmastery.com/use-weight-regularization-lstm-networks-time-series-forecasting/
    # model.add(LSTM(25, input_shape=(input_dim, feature_size), dropout=0.2, kernel_regularizer=l1_l2(0.00,0.00), bias_regularizer=l1_l2(0.00,0.00)))
    # model.add(Dense(1))
    # model.add(Activation(double_tanh))
    # model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mse', 'mae'])
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()

    # Option 4
    # Set up & fit LSTM RNN
    # model = Sequential()
    # model.add(LSTM(units=lstm_len, return_sequences=True, input_shape=(x_train.shape[1], 1)))
    # model.add(LSTM(units=int(lstm_len/2)))
    # model.add(Dense(1, activation='sigmoid'))
    # model.compile(loss='mean_squared_error', optimizer='adam')
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()



    # Generate predictions
    predictiontr = model.predict(X_train, verbose=0)
    predictiontr = y_scaler.inverse_transform(predictiontr).tolist()
    outputtr = []
    for i in range(len(predictiontr)):
        outputtr.extend(predictiontr[i])
    predictiontr = outputtr
    # Generate error data on the original (inverse-transformed) scale

    ## replace with yc, X_test generated by new multistep method
    Original_tr = y_scaler.inverse_transform(y_train).flatten().tolist()
    mse_tr = mean_squared_error(Original_tr, predictiontr)
    rmse_tr = mse_tr ** 0.5
    mae_tr = mean_absolute_error(Original_tr, pd.Series(predictiontr))


    predictionte = model.predict(X_test, verbose=0)
    predictionte = (y_scaler.inverse_transform(predictionte) - det).tolist()  # det: manual level-shift adjustment
    outputte = []
    for i in range(len(predictionte)):
        outputte.extend(predictionte[i])
    predictionte = outputte
    # Generate error data on the original (inverse-transformed) scale

    Original_te = y_scaler.inverse_transform(y_test).flatten().tolist()
    mse_te = mean_squared_error(Original_te, predictionte)
    rmse_te = mse_te ** 0.5
    mae_te = mean_absolute_error(Original_te, pd.Series(predictionte))

    return Original_tr, predictiontr, mse_tr, rmse_tr,mae_tr,Original_te,predictionte, mse_te,rmse_te,mae_te
In [ ]:
if __name__ == '__main__':
    start_time = timeit.default_timer()
    simulation2 = {}
    imgfile = 'Experiment2'
    for ma in optimized_period:
              print(ma)
              print(functions[ma])
              print ( int( optimized_period[ma]))
            # if ma == 'SMA':
              low_vol = df.apply(lambda c:  functions[ma](c, timeperiod = int( optimized_period[ma])))
              low_vol = low_vol.fillna(0)
              low_vol_data = df['close']
              high_vol = pd.DataFrame()
              df2 = df.copy()
              for i in df2.columns:
                if i in low_vol.columns:
                  high_vol[i] = df2[i].subtract(low_vol[i], fill_value=0)
              high_vol_data = df['close']
              ## *****************************************************
              # Generate ARIMA and LSTM predictions
              print('\nWorking on ' + ma + ' predictions')
              try:
                print('parameters used : ', train_len, test_len)
                low_vol_Original, low_vol_prediction, low_vol_mse, low_vol_rmse,low_vol_mae = get_arima(low_vol,low_vol_data, train_len, test_len)
              except:
                  print('ARIMA error, skipping to next MA type')
                  continue
              Original_tr, predictiontr, mse_tr, rmse_tr,mae_tr,high_vol_Original, high_vol_prediction, high_vol_mse, high_vol_rmse,high_vol_mae, = get_lstm(high_vol,high_vol_data, train_len, test_len,imgfile,ma)
              final_prediction_tr = df['close'].head(train_len).values + pd.Series(predictiontr) # ignoring first 3 steps 
              mse_ftr = mean_squared_error(df['close'].head(train_len).values,final_prediction_tr.values)
              rmse_ftr = mse_ftr ** 0.5
              mape_ftr = mean_absolute_percentage_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
              mae_ftr = mean_absolute_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)

              final_prediction = pd.Series(low_vol_prediction[3:]) + pd.Series(high_vol_prediction)
              mse = mean_squared_error(df['close'].tail(test_len).values,final_prediction.values)
              rmse = mse ** 0.5
              mape = mean_absolute_percentage_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
              mae = mean_absolute_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
              # Generate prediction accuracy
              actual = df['close'].tail(test_len).values
              result_1 = []
              result_2 = []
              for i in range(1, len(final_prediction)):
                  # Compare prediction to previous close price
                  if final_prediction[i] > actual[i-1] and actual[i] > actual[i-1]:
                      result_1.append(1)
                  elif final_prediction[i] < actual[i-1] and actual[i] < actual[i-1]:
                      result_1.append(1)
                  else:
                      result_1.append(0)

                  # Compare prediction to previous prediction
                  if final_prediction[i] > final_prediction[i-1] and actual[i] > actual[i-1]:
                      result_2.append(1)
                  elif final_prediction[i] < final_prediction[i-1] and actual[i] < actual[i-1]:
                      result_2.append(1)
                  else:
                      result_2.append(0)

              accuracy_1 = np.mean(result_1)
              accuracy_2 = np.mean(result_2)

              simulation2[ma] = {'low_vol': {'original':list(low_vol_Original), 'prediction': list(low_vol_prediction) , 'mse': low_vol_mse,
                                            'rmse': low_vol_rmse, 'mae' : low_vol_mae},
                                'high_vol': {'original':list(high_vol_Original),'prediction': list(high_vol_prediction), 'mse': high_vol_mse,
                                            'rmse': high_vol_rmse, 'mae' : high_vol_mae},
                                'final_tr': {'original':df['close'].head(train_len).tolist(),'prediction': final_prediction_tr.values.tolist(), 'mse': mse_ftr,
                                            'rmse': rmse_ftr, 'mae' : mae_ftr},
                                'final': {'original': df['close'].tail(test_len).tolist(), 'prediction': final_prediction.values.tolist(), 'mse': mse,
                                          'rmse': rmse, 'mae': mae },
                                'accuracy': {'prediction vs close': accuracy_1, 'prediction vs prediction': accuracy_2}}

              # save simulation data here as checkpoint
              with open('simulation2_data.json', 'w') as fp:
                  json.dump(simulation2, fp)

              for key in simulation2.keys():
                  print('\n' + key)
                  print('Prediction vs Close:\t\t' + str(round(100*simulation2[key]['accuracy']['prediction vs close'], 2))
                        + '% Accuracy')
                  print('Prediction vs Prediction:\t' + str(round(100*simulation2[key]['accuracy']['prediction vs prediction'], 2))
                        + '% Accuracy')
                  print('MSE:\t', simulation2[key]['final']['mse'],
                        '\nRMSE:\t', simulation2[key]['final']['rmse'],
                        '\nMAE:\t', simulation2[key]['final']['mae'])#,
                        # '\nMAPE:\t', simulation[ma]['final']['mape'])
            # else:
            #   break
    elapsed = timeit.default_timer() - start_time
    print('Runtime: mins:',elapsed/60)
SMA
SMA([input_arrays], [timeperiod=30])

Simple Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
17

Working on SMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.66 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4157.020, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3687.148, Time=0.07 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.27 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3458.651, Time=0.09 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3322.133, Time=0.13 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.01 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=1.08 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3324.133, Time=0.26 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 3.637 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1657.067
Date:                Sun, 12 Dec 2021   AIC                           3322.133
Time:                        12:17:59   BIC                           3340.897
Sample:                             0   HQIC                          3329.339
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1966      0.003   -387.226      0.000      -1.203      -1.191
ar.L2         -0.8952      0.006   -138.692      0.000      -0.908      -0.883
ar.L3         -0.3968      0.006    -68.284      0.000      -0.408      -0.385
sigma2         3.5858      0.017    214.535      0.000       3.553       3.619
===================================================================================
Ljung-Box (L1) (Q):                  14.47   Jarque-Bera (JB):           2428881.42
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       271.99
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.03468, saving model to LSTM2.h5
81/81 - 7s - loss: 0.0868 - accuracy: 0.0000e+00 - val_loss: 0.0347 - val_accuracy: 0.0037 - lr: 0.0010 - 7s/epoch - 91ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.03468 to 0.01570, saving model to LSTM2.h5
81/81 - 1s - loss: 0.0291 - accuracy: 0.0000e+00 - val_loss: 0.0157 - val_accuracy: 0.0037 - lr: 0.0010 - 736ms/epoch - 9ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.01570
81/81 - 1s - loss: 0.0267 - accuracy: 0.0000e+00 - val_loss: 0.0366 - val_accuracy: 0.0037 - lr: 0.0010 - 701ms/epoch - 9ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.01570 to 0.00748, saving model to LSTM2.h5
81/81 - 1s - loss: 0.0350 - accuracy: 0.0000e+00 - val_loss: 0.0075 - val_accuracy: 0.0037 - lr: 0.0010 - 757ms/epoch - 9ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.00748 to 0.00470, saving model to LSTM2.h5
81/81 - 1s - loss: 0.0181 - accuracy: 0.0000e+00 - val_loss: 0.0047 - val_accuracy: 0.0037 - lr: 0.0010 - 712ms/epoch - 9ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.00470 to 0.00433, saving model to LSTM2.h5
81/81 - 1s - loss: 0.0171 - accuracy: 0.0000e+00 - val_loss: 0.0043 - val_accuracy: 0.0037 - lr: 0.0010 - 742ms/epoch - 9ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.00433
81/81 - 1s - loss: 0.0184 - accuracy: 0.0000e+00 - val_loss: 0.0050 - val_accuracy: 0.0037 - lr: 0.0010 - 741ms/epoch - 9ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.00433
81/81 - 1s - loss: 0.0190 - accuracy: 0.0000e+00 - val_loss: 0.0057 - val_accuracy: 0.0037 - lr: 0.0010 - 731ms/epoch - 9ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.00433
81/81 - 1s - loss: 0.0197 - accuracy: 0.0000e+00 - val_loss: 0.0065 - val_accuracy: 0.0037 - lr: 0.0010 - 669ms/epoch - 8ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.00433
81/81 - 1s - loss: 0.0198 - accuracy: 0.0000e+00 - val_loss: 0.0068 - val_accuracy: 0.0037 - lr: 0.0010 - 693ms/epoch - 9ms/step
Epoch 11/500

Epoch 00011: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00011: val_loss did not improve from 0.00433
81/81 - 1s - loss: 0.0186 - accuracy: 0.0000e+00 - val_loss: 0.0065 - val_accuracy: 0.0037 - lr: 0.0010 - 708ms/epoch - 9ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.00433
81/81 - 1s - loss: 0.0234 - accuracy: 0.0000e+00 - val_loss: 0.0519 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 727ms/epoch - 9ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.00433
81/81 - 1s - loss: 0.0046 - accuracy: 0.0000e+00 - val_loss: 0.0467 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 701ms/epoch - 9ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.00433
81/81 - 1s - loss: 0.0033 - accuracy: 0.0000e+00 - val_loss: 0.0426 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 687ms/epoch - 8ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.00433
81/81 - 1s - loss: 0.0024 - accuracy: 0.0000e+00 - val_loss: 0.0384 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 694ms/epoch - 9ms/step
Epoch 16/500

Epoch 00016: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00016: val_loss did not improve from 0.00433
81/81 - 1s - loss: 0.0018 - accuracy: 0.0000e+00 - val_loss: 0.0341 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 709ms/epoch - 9ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.00433
81/81 - 1s - loss: 0.0014 - accuracy: 0.0000e+00 - val_loss: 0.0343 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 737ms/epoch - 9ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.00433
81/81 - 1s - loss: 0.0014 - accuracy: 0.0000e+00 - val_loss: 0.0344 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 674ms/epoch - 8ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.00433
81/81 - 1s - loss: 0.0014 - accuracy: 0.0000e+00 - val_loss: 0.0343 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 686ms/epoch - 8ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.00433
81/81 - 1s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0341 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 742ms/epoch - 9ms/step
Epoch 21/500

Epoch 00021: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00021: val_loss did not improve from 0.00433
81/81 - 1s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0338 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 711ms/epoch - 9ms/step
Epoch 22/500
…
[Epochs 22–56 truncated: with lr = 1.0000e-05 throughout, training loss fell steadily from 0.0013 to 9.2424e-04 and val_loss from 0.0335 to 0.0129, but val_loss never improved on 0.00433.]
Epoch 00056: early stopping
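The `lr` column in the log above drops from 1e-03 to 1e-04 to 1e-05 before training halts. A minimal sketch of the patience logic behind Keras's `ReduceLROnPlateau`, assuming `patience=5` (inferred from the reductions at epochs 11, 16 and 21; the callback's actual configuration is not shown in this chunk):

```python
def reduce_lr_on_plateau(val_losses, lr=1e-3, factor=0.1, patience=5, min_lr=1e-5):
    """Replay the schedule visible in the log: when val_loss fails to
    improve for `patience` consecutive epochs, multiply lr by `factor`,
    floored at `min_lr`. Returns the lr in effect after each epoch."""
    best, wait, history = float("inf"), 0, []
    for v in val_losses:
        if v < best:
            best, wait = v, 0  # new best val_loss resets the counter
        else:
            wait += 1
            if wait >= patience:
                lr, wait = max(lr * factor, min_lr), 0
        history.append(lr)
    return history
```

`EarlyStopping` follows the same counting pattern but halts training outright once its (apparently much longer) patience runs out — here at epoch 56, 50 epochs after the best val_loss of 0.00433.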
SMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 136.8787627980392 
RMSE:	 11.699519767838302 
MAPE:	 9.782684450137419
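These summary figures follow the standard definitions — as a sanity check, √136.8788 ≈ 11.6995, matching the reported RMSE. A sketch of the metric computation, assuming plain NumPy arrays (the notebook's own helper is not shown in this chunk):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MSE, RMSE and MAPE as reported after each model above."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    mse = np.mean((y_true - y_pred) ** 2)
    rmse = np.sqrt(mse)                                   # RMSE is just sqrt(MSE)
    mape = np.mean(np.abs((y_true - y_pred) / y_true)) * 100
    return mse, rmse, mape
```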
EMA
EMA([input_arrays], [timeperiod=30])

Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
51
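The TA-Lib docstring above describes the inputs but not the recursion. A pure-NumPy sketch of an EMA, assuming TA-Lib's convention of seeding with an SMA over the first `timeperiod` values (pandas' `ewm(adjust=False)` differs only in that seed):

```python
import numpy as np

def ema(prices, timeperiod=30):
    """Exponential moving average with smoothing alpha = 2/(timeperiod+1),
    seeded by the SMA of the first `timeperiod` bars; earlier slots are NaN."""
    prices = np.asarray(prices, dtype=float)
    alpha = 2.0 / (timeperiod + 1)
    out = np.full(prices.shape, np.nan)
    if len(prices) < timeperiod:
        return out
    out[timeperiod - 1] = prices[:timeperiod].mean()      # SMA seed
    for i in range(timeperiod, len(prices)):
        out[i] = alpha * prices[i] + (1 - alpha) * out[i - 1]
    return out
```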

Working on EMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.57 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4231.556, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3761.238, Time=0.06 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.41 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3532.227, Time=0.09 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3394.496, Time=0.13 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.18 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.91 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3396.496, Time=0.27 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 3.680 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1693.248
Date:                Sun, 12 Dec 2021   AIC                           3394.496
Time:                        12:20:17   BIC                           3413.260
Sample:                             0   HQIC                          3401.702
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1982      0.003   -389.569      0.000      -1.204      -1.192
ar.L2         -0.8976      0.006   -139.811      0.000      -0.910      -0.885
ar.L3         -0.3984      0.006    -68.662      0.000      -0.410      -0.387
sigma2         3.9230      0.018    215.372      0.000       3.887       3.959
===================================================================================
Ljung-Box (L1) (Q):                  14.54   Jarque-Bera (JB):           2462173.05
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       273.82
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 
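The information criteria in the table can be checked by hand: AIC = 2k − 2 ln L̂ with k = 4 estimated parameters (ar.L1–ar.L3 and sigma2), and BIC replaces 2k with k ln n. Using n = 805 — the 808 observations minus the 3 lost to differencing, an assumption about statsmodels' effective sample size — reproduces the table's BIC to within rounding:

```python
import math

# Values taken from the SARIMAX table above; n_eff = 805 is an assumption
# (808 observations minus d = 3 lost to third differencing).
k, loglik, n_eff = 4, -1693.248, 805
aic = 2 * k - 2 * loglik            # 2*4 + 3386.496
bic = math.log(n_eff) * k - 2 * loglik
print(round(aic, 3), round(bic, 3))
```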

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.06416, saving model to LSTM2.h5
81/81 - 6s - loss: 0.1950 - accuracy: 0.0000e+00 - val_loss: 0.0642 - val_accuracy: 0.0037 - lr: 0.0010 - 6s/epoch - 69ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.06416 to 0.01511, saving model to LSTM2.h5
81/81 - 1s - loss: 0.0196 - accuracy: 0.0000e+00 - val_loss: 0.0151 - val_accuracy: 0.0037 - lr: 0.0010 - 702ms/epoch - 9ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.01511
81/81 - 1s - loss: 0.0041 - accuracy: 0.0000e+00 - val_loss: 0.0225 - val_accuracy: 0.0037 - lr: 0.0010 - 719ms/epoch - 9ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.01511 to 0.01044, saving model to LSTM2.h5
81/81 - 1s - loss: 0.0062 - accuracy: 0.0000e+00 - val_loss: 0.0104 - val_accuracy: 0.0037 - lr: 0.0010 - 739ms/epoch - 9ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.01044
81/81 - 1s - loss: 0.0111 - accuracy: 0.0000e+00 - val_loss: 0.0126 - val_accuracy: 0.0037 - lr: 0.0010 - 666ms/epoch - 8ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.01044
81/81 - 1s - loss: 0.0177 - accuracy: 0.0000e+00 - val_loss: 0.0192 - val_accuracy: 0.0037 - lr: 0.0010 - 677ms/epoch - 8ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.01044
81/81 - 1s - loss: 0.0174 - accuracy: 0.0000e+00 - val_loss: 0.0227 - val_accuracy: 0.0037 - lr: 0.0010 - 656ms/epoch - 8ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.01044
81/81 - 1s - loss: 0.0131 - accuracy: 0.0000e+00 - val_loss: 0.0333 - val_accuracy: 0.0037 - lr: 0.0010 - 676ms/epoch - 8ms/step
Epoch 9/500

Epoch 00009: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00009: val_loss did not improve from 0.01044
81/81 - 1s - loss: 0.0082 - accuracy: 0.0000e+00 - val_loss: 0.0487 - val_accuracy: 0.0037 - lr: 0.0010 - 676ms/epoch - 8ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.01044
81/81 - 1s - loss: 0.0130 - accuracy: 0.0000e+00 - val_loss: 0.0795 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 677ms/epoch - 8ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.01044
81/81 - 1s - loss: 0.0093 - accuracy: 0.0000e+00 - val_loss: 0.0790 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 697ms/epoch - 9ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.01044
81/81 - 1s - loss: 0.0082 - accuracy: 0.0000e+00 - val_loss: 0.0843 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 676ms/epoch - 8ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.01044
81/81 - 1s - loss: 0.0073 - accuracy: 0.0000e+00 - val_loss: 0.0910 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 677ms/epoch - 8ms/step
Epoch 14/500

Epoch 00014: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00014: val_loss did not improve from 0.01044
81/81 - 1s - loss: 0.0066 - accuracy: 0.0000e+00 - val_loss: 0.0974 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 686ms/epoch - 8ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.01044
81/81 - 1s - loss: 0.0060 - accuracy: 0.0000e+00 - val_loss: 0.1040 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 684ms/epoch - 8ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.01044
81/81 - 1s - loss: 0.0053 - accuracy: 0.0000e+00 - val_loss: 0.1093 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 683ms/epoch - 8ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.01044
81/81 - 1s - loss: 0.0049 - accuracy: 0.0000e+00 - val_loss: 0.1133 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 684ms/epoch - 8ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.01044
81/81 - 1s - loss: 0.0046 - accuracy: 0.0000e+00 - val_loss: 0.1163 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 675ms/epoch - 8ms/step
Epoch 19/500

Epoch 00019: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00019: val_loss did not improve from 0.01044
81/81 - 1s - loss: 0.0045 - accuracy: 0.0000e+00 - val_loss: 0.1185 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 675ms/epoch - 8ms/step
Epoch 20/500
…
[Epochs 20–54 truncated: with lr = 1.0000e-05 throughout, training loss fell from 0.0044 to 0.0026 while val_loss drifted from 0.1201 to 0.1301, never improving on 0.01044.]
Epoch 00054: early stopping
SMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 136.8787627980392 
RMSE:	 11.699519767838302 
MAPE:	 9.782684450137419

EMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	44.78% Accuracy
MSE:	 325.2876788257106 
RMSE:	 18.035733387520192 
MAPE:	 15.58326491461623
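The exact definition behind the "Prediction vs Close" percentages is not shown in this chunk; one plausible reading is directional accuracy — the share of steps where the predicted move (up or down) matches the actual move:

```python
import numpy as np

def directional_accuracy(pred, actual):
    """Percentage of steps whose predicted direction matches the actual
    direction. A hypothetical reconstruction, not the notebook's helper."""
    pred, actual = np.asarray(pred, float), np.asarray(actual, float)
    return float(np.mean(np.sign(np.diff(pred)) == np.sign(np.diff(actual))) * 100)
```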
WMA
WMA([input_arrays], [timeperiod=30])

Weighted Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
49
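Per the docstring, WMA weights the lookback window linearly, with the most recent bar weighted heaviest. A self-contained sketch (a hypothetical helper; the notebook calls TA-Lib's `WMA` instead):

```python
import numpy as np

def wma(prices, timeperiod=30):
    """Linearly weighted moving average: weights 1..timeperiod, newest
    bar heaviest, normalized to sum to 1; leading slots are NaN."""
    prices = np.asarray(prices, dtype=float)
    w = np.arange(1, timeperiod + 1, dtype=float)
    w /= w.sum()
    out = np.full(prices.shape, np.nan)
    for i in range(timeperiod - 1, len(prices)):
        out[i] = prices[i - timeperiod + 1:i + 1] @ w
    return out
```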

Working on WMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.57 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4264.089, Time=0.05 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3793.930, Time=0.06 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.34 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3564.923, Time=0.09 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3427.258, Time=0.12 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.68 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.62 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3429.258, Time=0.27 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 3.816 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1709.629
Date:                Sun, 12 Dec 2021   AIC                           3427.258
Time:                        12:22:33   BIC                           3446.021
Sample:                             0   HQIC                          3434.464
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1981      0.003   -389.386      0.000      -1.204      -1.192
ar.L2         -0.8974      0.006   -139.699      0.000      -0.910      -0.885
ar.L3         -0.3983      0.006    -68.737      0.000      -0.410      -0.387
sigma2         4.0860      0.019    215.311      0.000       4.049       4.123
===================================================================================
Ljung-Box (L1) (Q):                  14.57   Jarque-Bera (JB):           2460901.70
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       273.75
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.17237, saving model to LSTM2.h5
81/81 - 6s - loss: 0.1566 - accuracy: 0.0000e+00 - val_loss: 0.1724 - val_accuracy: 0.0000e+00 - lr: 0.0010 - 6s/epoch - 76ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.17237 to 0.01302, saving model to LSTM2.h5
81/81 - 1s - loss: 0.0208 - accuracy: 0.0000e+00 - val_loss: 0.0130 - val_accuracy: 0.0037 - lr: 0.0010 - 744ms/epoch - 9ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.01302
81/81 - 1s - loss: 0.0153 - accuracy: 0.0000e+00 - val_loss: 0.0210 - val_accuracy: 0.0037 - lr: 0.0010 - 686ms/epoch - 8ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.01302 to 0.00607, saving model to LSTM2.h5
81/81 - 1s - loss: 0.0182 - accuracy: 0.0000e+00 - val_loss: 0.0061 - val_accuracy: 0.0037 - lr: 0.0010 - 709ms/epoch - 9ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.00607 to 0.00465, saving model to LSTM2.h5
81/81 - 1s - loss: 0.0144 - accuracy: 0.0000e+00 - val_loss: 0.0047 - val_accuracy: 0.0037 - lr: 0.0010 - 725ms/epoch - 9ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.00465
81/81 - 1s - loss: 0.0181 - accuracy: 0.0000e+00 - val_loss: 0.0048 - val_accuracy: 0.0037 - lr: 0.0010 - 726ms/epoch - 9ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.00465
81/81 - 1s - loss: 0.0242 - accuracy: 0.0000e+00 - val_loss: 0.0050 - val_accuracy: 0.0037 - lr: 0.0010 - 724ms/epoch - 9ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.00465
81/81 - 1s - loss: 0.0260 - accuracy: 0.0000e+00 - val_loss: 0.0054 - val_accuracy: 0.0037 - lr: 0.0010 - 683ms/epoch - 8ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.00465
81/81 - 1s - loss: 0.0239 - accuracy: 0.0000e+00 - val_loss: 0.0052 - val_accuracy: 0.0037 - lr: 0.0010 - 692ms/epoch - 9ms/step
Epoch 10/500

Epoch 00010: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00010: val_loss did not improve from 0.00465
81/81 - 1s - loss: 0.0197 - accuracy: 0.0000e+00 - val_loss: 0.0054 - val_accuracy: 0.0037 - lr: 0.0010 - 689ms/epoch - 9ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.00465
81/81 - 1s - loss: 0.0184 - accuracy: 0.0000e+00 - val_loss: 0.0307 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 679ms/epoch - 8ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.00465
81/81 - 1s - loss: 0.0041 - accuracy: 0.0000e+00 - val_loss: 0.0268 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 674ms/epoch - 8ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.00465
81/81 - 1s - loss: 0.0029 - accuracy: 0.0000e+00 - val_loss: 0.0233 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 703ms/epoch - 9ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.00465
81/81 - 1s - loss: 0.0022 - accuracy: 0.0000e+00 - val_loss: 0.0203 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 704ms/epoch - 9ms/step
Epoch 15/500

Epoch 00015: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00015: val_loss did not improve from 0.00465
81/81 - 1s - loss: 0.0017 - accuracy: 0.0000e+00 - val_loss: 0.0177 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 695ms/epoch - 9ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.00465
81/81 - 1s - loss: 0.0014 - accuracy: 0.0000e+00 - val_loss: 0.0177 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 705ms/epoch - 9ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.00465
81/81 - 1s - loss: 0.0014 - accuracy: 0.0000e+00 - val_loss: 0.0177 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 690ms/epoch - 9ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.00465
81/81 - 1s - loss: 0.0014 - accuracy: 0.0000e+00 - val_loss: 0.0176 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 672ms/epoch - 8ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.00465
81/81 - 1s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0174 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 690ms/epoch - 9ms/step
Epoch 20/500

Epoch 00020: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00020: val_loss did not improve from 0.00465
81/81 - 1s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0173 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 688ms/epoch - 8ms/step
Epoch 21/500
…
[Epochs 21–44 truncated: with lr = 1.0000e-05 throughout, training loss fell from 0.0013 to 0.0010 and val_loss from 0.0170 to 0.0104, none improving on 0.00465.]
81/81 - 1s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0101 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 679ms/epoch - 8ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.00465
81/81 - 1s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0098 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 702ms/epoch - 9ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.00465
81/81 - 1s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0095 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 666ms/epoch - 8ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.00465
81/81 - 1s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0092 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 708ms/epoch - 9ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.00465
81/81 - 1s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0090 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 735ms/epoch - 9ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.00465
81/81 - 1s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0087 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 676ms/epoch - 8ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.00465
81/81 - 1s - loss: 9.9766e-04 - accuracy: 0.0000e+00 - val_loss: 0.0084 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 729ms/epoch - 9ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.00465
81/81 - 1s - loss: 9.9414e-04 - accuracy: 0.0000e+00 - val_loss: 0.0082 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 681ms/epoch - 8ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.00465
81/81 - 1s - loss: 9.9079e-04 - accuracy: 0.0000e+00 - val_loss: 0.0079 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 714ms/epoch - 9ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.00465
81/81 - 1s - loss: 9.8760e-04 - accuracy: 0.0000e+00 - val_loss: 0.0077 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 722ms/epoch - 9ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.00465
81/81 - 1s - loss: 9.8453e-04 - accuracy: 0.0000e+00 - val_loss: 0.0075 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 701ms/epoch - 9ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.00465
81/81 - 1s - loss: 9.8158e-04 - accuracy: 0.0000e+00 - val_loss: 0.0072 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 697ms/epoch - 9ms/step
Epoch 00055: early stopping
SMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 136.8787627980392 
RMSE:	 11.699519767838302 
MAPE:	 9.782684450137419

EMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	44.78% Accuracy
MSE:	 325.2876788257106 
RMSE:	 18.035733387520192 
MAPE:	 15.58326491461623

WMA
Prediction vs Close:		54.85% Accuracy
Prediction vs Prediction:	43.28% Accuracy
MSE:	 142.26654445774975 
RMSE:	 11.927554001460221 
MAPE:	 9.787176512242624
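
The MSE, RMSE, and MAPE figures above follow their standard definitions, and the "Prediction vs Close" percentage is presumably a directional hit rate (how often the predicted move has the same sign as the actual move). A minimal NumPy sketch of how such figures could be computed — the function names and the directional formulation are illustrative, not taken from the notebook's code:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Return MSE, RMSE and MAPE (in percent) for two aligned series."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mse = np.mean((y_true - y_pred) ** 2)       # mean squared error
    rmse = np.sqrt(mse)                         # same units as the series
    mape = np.mean(np.abs((y_true - y_pred) / y_true)) * 100  # percent
    return mse, rmse, mape

def directional_accuracy(y_true, y_pred):
    """Share of steps where the predicted direction matches the actual one
    (an assumed formulation of the 'Prediction vs Close' accuracy)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean(np.sign(np.diff(y_true)) == np.sign(np.diff(y_pred))) * 100

mse, rmse, mape = regression_metrics([100, 102, 98], [101, 100, 99])
da = directional_accuracy([100, 102, 98], [101, 100, 99])
```

Note that MAPE is undefined when the true series contains zeros; Google Trends values of 0 would need special handling.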

DEMA
DEMA([input_arrays], [timeperiod=30])

Double Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
89
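
The signature above is TA-Lib's docstring for DEMA. The indicator itself is just a combination of two chained EMAs, DEMA = 2·EMA(price) − EMA(EMA(price)), which reduces the lag of a plain EMA. A pandas-based sketch (TA-Lib seeds its EMA differently, so values will differ slightly from `talib.DEMA`):

```python
import pandas as pd

def dema(price: pd.Series, timeperiod: int = 30) -> pd.Series:
    """Double Exponential Moving Average via two chained EMAs."""
    ema1 = price.ewm(span=timeperiod, adjust=False).mean()
    ema2 = ema1.ewm(span=timeperiod, adjust=False).mean()
    return 2 * ema1 - ema2  # cancels most of the single-EMA lag

s = pd.Series(range(1, 101), dtype=float)   # a simple linear ramp
out = dema(s, timeperiod=10)
```

On a linear trend the two lag terms cancel exactly in steady state, so the DEMA tracks the ramp almost without delay — one reason the DEMA series is less smoothed (and more volatile) than the SMA/EMA inputs.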

Working on DEMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.58 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4436.126, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3965.317, Time=0.06 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.53 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3736.589, Time=0.08 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3598.951, Time=0.11 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.25 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=1.29 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3600.951, Time=0.24 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 4.207 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1795.475
Date:                Sun, 12 Dec 2021   AIC                           3598.951
Time:                        12:24:49   BIC                           3617.714
Sample:                             0   HQIC                          3606.157
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1983      0.003   -389.581      0.000      -1.204      -1.192
ar.L2         -0.8973      0.006   -139.732      0.000      -0.910      -0.885
ar.L3         -0.3983      0.006    -68.649      0.000      -0.410      -0.387
sigma2         5.0573      0.023    215.292      0.000       5.011       5.103
===================================================================================
Ljung-Box (L1) (Q):                  14.41   Jarque-Bera (JB):           2460553.80
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.89
Prob(H) (two-sided):                  0.00   Kurtosis:                       273.74
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.07371, saving model to LSTM2.h5
81/81 - 6s - loss: 0.0925 - accuracy: 0.0000e+00 - val_loss: 0.0737 - val_accuracy: 0.0037 - lr: 0.0010 - 6s/epoch - 77ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.07371 to 0.01875, saving model to LSTM2.h5
81/81 - 1s - loss: 0.0304 - accuracy: 0.0000e+00 - val_loss: 0.0187 - val_accuracy: 0.0037 - lr: 0.0010 - 735ms/epoch - 9ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.01875 to 0.00901, saving model to LSTM2.h5
81/81 - 1s - loss: 0.0406 - accuracy: 0.0000e+00 - val_loss: 0.0090 - val_accuracy: 0.0037 - lr: 0.0010 - 738ms/epoch - 9ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.00901 to 0.00558, saving model to LSTM2.h5
81/81 - 1s - loss: 0.0309 - accuracy: 0.0000e+00 - val_loss: 0.0056 - val_accuracy: 0.0037 - lr: 0.0010 - 728ms/epoch - 9ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.00558 to 0.00430, saving model to LSTM2.h5
81/81 - 1s - loss: 0.0301 - accuracy: 0.0000e+00 - val_loss: 0.0043 - val_accuracy: 0.0037 - lr: 0.0010 - 746ms/epoch - 9ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.00430 to 0.00422, saving model to LSTM2.h5
81/81 - 1s - loss: 0.0165 - accuracy: 0.0000e+00 - val_loss: 0.0042 - val_accuracy: 0.0037 - lr: 0.0010 - 728ms/epoch - 9ms/step
Epoch 7/500

Epoch 00007: val_loss improved from 0.00422 to 0.00343, saving model to LSTM2.h5
81/81 - 1s - loss: 0.0108 - accuracy: 0.0000e+00 - val_loss: 0.0034 - val_accuracy: 0.0037 - lr: 0.0010 - 736ms/epoch - 9ms/step
Epoch 8/500

Epoch 00008: val_loss improved from 0.00343 to 0.00311, saving model to LSTM2.h5
81/81 - 1s - loss: 0.0088 - accuracy: 0.0000e+00 - val_loss: 0.0031 - val_accuracy: 0.0037 - lr: 0.0010 - 743ms/epoch - 9ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.00311
81/81 - 1s - loss: 0.0087 - accuracy: 0.0000e+00 - val_loss: 0.0033 - val_accuracy: 0.0037 - lr: 0.0010 - 709ms/epoch - 9ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.00311
81/81 - 1s - loss: 0.0096 - accuracy: 0.0000e+00 - val_loss: 0.0036 - val_accuracy: 0.0037 - lr: 0.0010 - 699ms/epoch - 9ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.00311
81/81 - 1s - loss: 0.0110 - accuracy: 0.0000e+00 - val_loss: 0.0039 - val_accuracy: 0.0037 - lr: 0.0010 - 693ms/epoch - 9ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.00311
81/81 - 1s - loss: 0.0122 - accuracy: 0.0000e+00 - val_loss: 0.0042 - val_accuracy: 0.0037 - lr: 0.0010 - 708ms/epoch - 9ms/step
Epoch 13/500

Epoch 00013: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00013: val_loss did not improve from 0.00311
81/81 - 1s - loss: 0.0128 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 0.0010 - 707ms/epoch - 9ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.00311
81/81 - 1s - loss: 0.0172 - accuracy: 0.0000e+00 - val_loss: 0.0092 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 737ms/epoch - 9ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.00311
81/81 - 1s - loss: 0.0035 - accuracy: 0.0000e+00 - val_loss: 0.0063 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 725ms/epoch - 9ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.00311
81/81 - 1s - loss: 0.0025 - accuracy: 0.0000e+00 - val_loss: 0.0048 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 689ms/epoch - 9ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.00311
81/81 - 1s - loss: 0.0019 - accuracy: 0.0000e+00 - val_loss: 0.0037 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 698ms/epoch - 9ms/step
Epoch 18/500

Epoch 00018: val_loss improved from 0.00311 to 0.00292, saving model to LSTM2.h5
81/81 - 1s - loss: 0.0015 - accuracy: 0.0000e+00 - val_loss: 0.0029 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 905ms/epoch - 11ms/step
Epoch 19/500

Epoch 00019: val_loss improved from 0.00292 to 0.00250, saving model to LSTM2.h5
81/81 - 1s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0025 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 735ms/epoch - 9ms/step
Epoch 20/500

Epoch 00020: val_loss improved from 0.00250 to 0.00242, saving model to LSTM2.h5
81/81 - 1s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0024 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 744ms/epoch - 9ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.00242
81/81 - 1s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0026 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 690ms/epoch - 9ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.00242
81/81 - 1s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0031 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 715ms/epoch - 9ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.00242
81/81 - 1s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0037 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 714ms/epoch - 9ms/step
Epoch 24/500

Epoch 00024: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00024: val_loss did not improve from 0.00242
81/81 - 1s - loss: 9.9079e-04 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 691ms/epoch - 9ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.00242
81/81 - 1s - loss: 9.8979e-04 - accuracy: 0.0000e+00 - val_loss: 0.0043 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 721ms/epoch - 9ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.00242
81/81 - 1s - loss: 9.5139e-04 - accuracy: 0.0000e+00 - val_loss: 0.0042 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 709ms/epoch - 9ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.00242
81/81 - 1s - loss: 9.3537e-04 - accuracy: 0.0000e+00 - val_loss: 0.0042 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 715ms/epoch - 9ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.00242
81/81 - 1s - loss: 9.2832e-04 - accuracy: 0.0000e+00 - val_loss: 0.0042 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 694ms/epoch - 9ms/step
Epoch 29/500

Epoch 00029: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00029: val_loss did not improve from 0.00242
81/81 - 1s - loss: 9.2482e-04 - accuracy: 0.0000e+00 - val_loss: 0.0042 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 721ms/epoch - 9ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.00242
81/81 - 1s - loss: 9.2276e-04 - accuracy: 0.0000e+00 - val_loss: 0.0042 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 709ms/epoch - 9ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.00242
81/81 - 1s - loss: 9.2131e-04 - accuracy: 0.0000e+00 - val_loss: 0.0043 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 714ms/epoch - 9ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.00242
81/81 - 1s - loss: 9.2015e-04 - accuracy: 0.0000e+00 - val_loss: 0.0043 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 728ms/epoch - 9ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.00242
81/81 - 1s - loss: 9.1911e-04 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 720ms/epoch - 9ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.00242
81/81 - 1s - loss: 9.1814e-04 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 701ms/epoch - 9ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.00242
81/81 - 1s - loss: 9.1719e-04 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 708ms/epoch - 9ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.00242
81/81 - 1s - loss: 9.1625e-04 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 755ms/epoch - 9ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.00242
81/81 - 1s - loss: 9.1531e-04 - accuracy: 0.0000e+00 - val_loss: 0.0046 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 714ms/epoch - 9ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.00242
81/81 - 1s - loss: 9.1435e-04 - accuracy: 0.0000e+00 - val_loss: 0.0047 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 705ms/epoch - 9ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.00242
81/81 - 1s - loss: 9.1338e-04 - accuracy: 0.0000e+00 - val_loss: 0.0048 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 709ms/epoch - 9ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.00242
81/81 - 1s - loss: 9.1240e-04 - accuracy: 0.0000e+00 - val_loss: 0.0049 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 703ms/epoch - 9ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.00242
81/81 - 1s - loss: 9.1140e-04 - accuracy: 0.0000e+00 - val_loss: 0.0049 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 706ms/epoch - 9ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.00242
81/81 - 1s - loss: 9.1038e-04 - accuracy: 0.0000e+00 - val_loss: 0.0050 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 691ms/epoch - 9ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.00242
81/81 - 1s - loss: 9.0935e-04 - accuracy: 0.0000e+00 - val_loss: 0.0051 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 701ms/epoch - 9ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.00242
81/81 - 1s - loss: 9.0829e-04 - accuracy: 0.0000e+00 - val_loss: 0.0052 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 700ms/epoch - 9ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.00242
81/81 - 1s - loss: 9.0722e-04 - accuracy: 0.0000e+00 - val_loss: 0.0053 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 694ms/epoch - 9ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.00242
81/81 - 1s - loss: 9.0612e-04 - accuracy: 0.0000e+00 - val_loss: 0.0054 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 691ms/epoch - 9ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.00242
81/81 - 1s - loss: 9.0500e-04 - accuracy: 0.0000e+00 - val_loss: 0.0055 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 704ms/epoch - 9ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.00242
81/81 - 1s - loss: 9.0385e-04 - accuracy: 0.0000e+00 - val_loss: 0.0056 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 673ms/epoch - 8ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.00242
81/81 - 1s - loss: 9.0269e-04 - accuracy: 0.0000e+00 - val_loss: 0.0058 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 693ms/epoch - 9ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.00242
81/81 - 1s - loss: 9.0149e-04 - accuracy: 0.0000e+00 - val_loss: 0.0059 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 704ms/epoch - 9ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.00242
81/81 - 1s - loss: 9.0027e-04 - accuracy: 0.0000e+00 - val_loss: 0.0060 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 684ms/epoch - 8ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.00242
81/81 - 1s - loss: 8.9903e-04 - accuracy: 0.0000e+00 - val_loss: 0.0061 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 733ms/epoch - 9ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.00242
81/81 - 1s - loss: 8.9775e-04 - accuracy: 0.0000e+00 - val_loss: 0.0063 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 711ms/epoch - 9ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.00242
81/81 - 1s - loss: 8.9645e-04 - accuracy: 0.0000e+00 - val_loss: 0.0064 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 736ms/epoch - 9ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.00242
81/81 - 1s - loss: 8.9512e-04 - accuracy: 0.0000e+00 - val_loss: 0.0066 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 730ms/epoch - 9ms/step
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.00242
81/81 - 1s - loss: 8.9376e-04 - accuracy: 0.0000e+00 - val_loss: 0.0067 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 713ms/epoch - 9ms/step
Epoch 57/500

Epoch 00057: val_loss did not improve from 0.00242
81/81 - 1s - loss: 8.9238e-04 - accuracy: 0.0000e+00 - val_loss: 0.0068 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 704ms/epoch - 9ms/step
Epoch 58/500

Epoch 00058: val_loss did not improve from 0.00242
81/81 - 1s - loss: 8.9096e-04 - accuracy: 0.0000e+00 - val_loss: 0.0070 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 688ms/epoch - 8ms/step
Epoch 59/500

Epoch 00059: val_loss did not improve from 0.00242
81/81 - 1s - loss: 8.8951e-04 - accuracy: 0.0000e+00 - val_loss: 0.0072 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 686ms/epoch - 8ms/step
Epoch 60/500

Epoch 00060: val_loss did not improve from 0.00242
81/81 - 1s - loss: 8.8803e-04 - accuracy: 0.0000e+00 - val_loss: 0.0073 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 710ms/epoch - 9ms/step
Epoch 61/500

Epoch 00061: val_loss did not improve from 0.00242
81/81 - 1s - loss: 8.8652e-04 - accuracy: 0.0000e+00 - val_loss: 0.0075 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 680ms/epoch - 8ms/step
Epoch 62/500

Epoch 00062: val_loss did not improve from 0.00242
81/81 - 1s - loss: 8.8497e-04 - accuracy: 0.0000e+00 - val_loss: 0.0076 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 679ms/epoch - 8ms/step
Epoch 63/500

Epoch 00063: val_loss did not improve from 0.00242
81/81 - 1s - loss: 8.8340e-04 - accuracy: 0.0000e+00 - val_loss: 0.0078 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 664ms/epoch - 8ms/step
Epoch 64/500

Epoch 00064: val_loss did not improve from 0.00242
81/81 - 1s - loss: 8.8179e-04 - accuracy: 0.0000e+00 - val_loss: 0.0080 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 674ms/epoch - 8ms/step
Epoch 65/500

Epoch 00065: val_loss did not improve from 0.00242
81/81 - 1s - loss: 8.8015e-04 - accuracy: 0.0000e+00 - val_loss: 0.0082 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 669ms/epoch - 8ms/step
Epoch 66/500

Epoch 00066: val_loss did not improve from 0.00242
81/81 - 1s - loss: 8.7847e-04 - accuracy: 0.0000e+00 - val_loss: 0.0083 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 668ms/epoch - 8ms/step
Epoch 67/500

Epoch 00067: val_loss did not improve from 0.00242
81/81 - 1s - loss: 8.7676e-04 - accuracy: 0.0000e+00 - val_loss: 0.0085 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 658ms/epoch - 8ms/step
Epoch 68/500

Epoch 00068: val_loss did not improve from 0.00242
81/81 - 1s - loss: 8.7502e-04 - accuracy: 0.0000e+00 - val_loss: 0.0087 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 695ms/epoch - 9ms/step
Epoch 69/500

Epoch 00069: val_loss did not improve from 0.00242
81/81 - 1s - loss: 8.7325e-04 - accuracy: 0.0000e+00 - val_loss: 0.0089 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 660ms/epoch - 8ms/step
Epoch 70/500

Epoch 00070: val_loss did not improve from 0.00242
81/81 - 1s - loss: 8.7144e-04 - accuracy: 0.0000e+00 - val_loss: 0.0090 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 692ms/epoch - 9ms/step
Epoch 00070: early stopping
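
The training log reflects three Keras callbacks: `ModelCheckpoint` saving the best weights to LSTM2.h5 on `val_loss`, `ReduceLROnPlateau` stepping the learning rate 1e-3 → 1e-4 → 1e-5, and `EarlyStopping` cutting training well short of the 500-epoch cap. A sketch of a matching configuration — the patience values are guesses inferred from the log's timing, not the notebook's actual code:

```python
from tensorflow.keras.callbacks import (
    EarlyStopping, ModelCheckpoint, ReduceLROnPlateau)

callbacks = [
    # Keep only the best weights seen so far, as in the
    # "saving model to LSTM2.h5" lines of the log
    ModelCheckpoint("LSTM2.h5", monitor="val_loss",
                    save_best_only=True, verbose=1),
    # Cut the learning rate 10x when val_loss plateaus
    # (1e-3 -> 1e-4 -> 1e-5 in the log); patience is illustrative
    ReduceLROnPlateau(monitor="val_loss", factor=0.1,
                      patience=5, min_lr=1e-5, verbose=1),
    # Stop long before the 500-epoch cap once val_loss stops improving
    EarlyStopping(monitor="val_loss", patience=35, verbose=1),
]
# model.fit(X_train, y_train, epochs=500,
#           validation_data=(X_val, y_val),
#           callbacks=callbacks, verbose=2)
```

Because the checkpoint restores the best epoch rather than the last, the steadily rising `val_loss` after the plateau (visible in the log) does not leak into the final predictions.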

DEMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	45.52% Accuracy
MSE:	 171.26734938505615 
RMSE:	 13.086915197442679 
MAPE:	 11.821213958102536

KAMA
KAMA([input_arrays], [timeperiod=30])

Kaufman Adaptive Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
18
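
The KAMA docstring above is again TA-Lib's. Kaufman's Adaptive Moving Average is an EMA whose smoothing constant adapts to the efficiency ratio (net change over the window divided by the sum of absolute step changes), so it smooths hard in choppy markets and tracks quickly in trends. A NumPy sketch of the standard recursion — seeding differs from TA-Lib's, so values will not match `talib.KAMA` exactly:

```python
import numpy as np

def kama(price, timeperiod=30, fast=2, slow=30):
    """Kaufman Adaptive Moving Average (standard recursion)."""
    price = np.asarray(price, dtype=float)
    n = timeperiod
    fastest = 2.0 / (fast + 1)
    slowest = 2.0 / (slow + 1)
    out = np.full_like(price, np.nan)
    out[n - 1] = price[n - 1]          # seed with the first full window
    for t in range(n, len(price)):
        change = abs(price[t] - price[t - n])
        volatility = np.sum(np.abs(np.diff(price[t - n:t + 1])))
        er = change / volatility if volatility else 0.0  # efficiency ratio
        sc = (er * (fastest - slowest) + slowest) ** 2   # smoothing constant
        out[t] = out[t - 1] + sc * (price[t] - out[t - 1])
    return out

vals = kama(np.arange(100, dtype=float), timeperiod=10)
```

On a clean trend the efficiency ratio is 1, so KAMA behaves like a fast EMA; on pure noise it approaches a very slow EMA. This adaptivity is what gives the KAMA series a different volatility profile from the fixed-window averages above.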

Working on KAMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.53 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4190.464, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3724.371, Time=0.06 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.39 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3494.154, Time=0.10 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3357.435, Time=0.13 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.57 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.99 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3359.435, Time=0.28 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 4.087 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1674.717
Date:                Sun, 12 Dec 2021   AIC                           3357.435
Time:                        12:27:40   BIC                           3376.198
Sample:                             0   HQIC                          3364.641
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1955      0.003   -381.246      0.000      -1.202      -1.189
ar.L2         -0.8964      0.007   -135.835      0.000      -0.909      -0.883
ar.L3         -0.3971      0.006    -67.229      0.000      -0.409      -0.385
sigma2         3.7466      0.018    211.623      0.000       3.712       3.781
===================================================================================
Ljung-Box (L1) (Q):                  14.20   Jarque-Bera (JB):           2338363.32
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.01   Skew:                             3.76
Prob(H) (two-sided):                  0.00   Kurtosis:                       266.93
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.09321, saving model to LSTM2.h5
81/81 - 6s - loss: 0.1163 - accuracy: 0.0000e+00 - val_loss: 0.0932 - val_accuracy: 0.0037 - lr: 0.0010 - 6s/epoch - 71ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.09321 to 0.07989, saving model to LSTM2.h5
81/81 - 1s - loss: 0.0552 - accuracy: 0.0000e+00 - val_loss: 0.0799 - val_accuracy: 0.0037 - lr: 0.0010 - 750ms/epoch - 9ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.07989 to 0.01022, saving model to LSTM2.h5
81/81 - 1s - loss: 0.0555 - accuracy: 0.0000e+00 - val_loss: 0.0102 - val_accuracy: 0.0037 - lr: 0.0010 - 741ms/epoch - 9ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.01022 to 0.00514, saving model to LSTM2.h5
81/81 - 1s - loss: 0.0274 - accuracy: 0.0000e+00 - val_loss: 0.0051 - val_accuracy: 0.0037 - lr: 0.0010 - 702ms/epoch - 9ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.00514
81/81 - 1s - loss: 0.0227 - accuracy: 0.0000e+00 - val_loss: 0.0059 - val_accuracy: 0.0037 - lr: 0.0010 - 707ms/epoch - 9ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.00514 to 0.00433, saving model to LSTM2.h5
81/81 - 1s - loss: 0.0094 - accuracy: 0.0000e+00 - val_loss: 0.0043 - val_accuracy: 0.0037 - lr: 0.0010 - 762ms/epoch - 9ms/step
Epoch 7/500

Epoch 00007: val_loss improved from 0.00433 to 0.00403, saving model to LSTM2.h5
81/81 - 1s - loss: 0.0060 - accuracy: 0.0000e+00 - val_loss: 0.0040 - val_accuracy: 0.0037 - lr: 0.0010 - 717ms/epoch - 9ms/step
Epoch 8/500

Epoch 00008: val_loss improved from 0.00403 to 0.00382, saving model to LSTM2.h5
81/81 - 1s - loss: 0.0054 - accuracy: 0.0000e+00 - val_loss: 0.0038 - val_accuracy: 0.0037 - lr: 0.0010 - 729ms/epoch - 9ms/step
Epoch 9/500

Epoch 00009: val_loss improved from 0.00382 to 0.00375, saving model to LSTM2.h5
81/81 - 1s - loss: 0.0056 - accuracy: 0.0000e+00 - val_loss: 0.0037 - val_accuracy: 0.0037 - lr: 0.0010 - 741ms/epoch - 9ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.00375
81/81 - 1s - loss: 0.0078 - accuracy: 0.0000e+00 - val_loss: 0.0047 - val_accuracy: 0.0037 - lr: 0.0010 - 705ms/epoch - 9ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.00375
81/81 - 1s - loss: 0.0100 - accuracy: 0.0000e+00 - val_loss: 0.0051 - val_accuracy: 0.0037 - lr: 0.0010 - 700ms/epoch - 9ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.00375
81/81 - 1s - loss: 0.0148 - accuracy: 0.0000e+00 - val_loss: 0.0052 - val_accuracy: 0.0037 - lr: 0.0010 - 749ms/epoch - 9ms/step
Epoch 13/500

Epoch 00013: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00013: val_loss did not improve from 0.00375
81/81 - 1s - loss: 0.0163 - accuracy: 0.0000e+00 - val_loss: 0.0050 - val_accuracy: 0.0037 - lr: 0.0010 - 691ms/epoch - 9ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.00375
81/81 - 1s - loss: 0.0203 - accuracy: 0.0000e+00 - val_loss: 0.0074 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 673ms/epoch - 8ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.00375
81/81 - 1s - loss: 0.0051 - accuracy: 0.0000e+00 - val_loss: 0.0060 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 700ms/epoch - 9ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.00375
81/81 - 1s - loss: 0.0035 - accuracy: 0.0000e+00 - val_loss: 0.0049 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 693ms/epoch - 9ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.00375
81/81 - 1s - loss: 0.0025 - accuracy: 0.0000e+00 - val_loss: 0.0039 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 689ms/epoch - 9ms/step
Epoch 18/500

Epoch 00018: val_loss improved from 0.00375 to 0.00307, saving model to LSTM2.h5
81/81 - 1s - loss: 0.0019 - accuracy: 0.0000e+00 - val_loss: 0.0031 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 795ms/epoch - 10ms/step
Epoch 19/500

Epoch 00019: val_loss improved from 0.00307 to 0.00258, saving model to LSTM2.h5
81/81 - 1s - loss: 0.0015 - accuracy: 0.0000e+00 - val_loss: 0.0026 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 737ms/epoch - 9ms/step
Epoch 20/500

Epoch 00020: val_loss improved from 0.00258 to 0.00241, saving model to LSTM2.h5
81/81 - 1s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0024 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 725ms/epoch - 9ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.00241
81/81 - 1s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0025 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 699ms/epoch - 9ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.00241
81/81 - 1s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0028 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 714ms/epoch - 9ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.00241
81/81 - 1s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0033 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 710ms/epoch - 9ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.00241
81/81 - 1s - loss: 9.6716e-04 - accuracy: 0.0000e+00 - val_loss: 0.0039 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 704ms/epoch - 9ms/step
Epoch 25/500

Epoch 00025: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00025: val_loss did not improve from 0.00241
81/81 - 1s - loss: 9.4434e-04 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 714ms/epoch - 9ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.00241
81/81 - 1s - loss: 9.1698e-04 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 698ms/epoch - 9ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.00241
81/81 - 1s - loss: 9.0828e-04 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 724ms/epoch - 9ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.00241
81/81 - 1s - loss: 9.0429e-04 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 737ms/epoch - 9ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.00241
81/81 - 1s - loss: 9.0209e-04 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 739ms/epoch - 9ms/step
Epoch 30/500

Epoch 00030: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00030: val_loss did not improve from 0.00241
81/81 - 1s - loss: 9.0056e-04 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 708ms/epoch - 9ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.00241
81/81 - 1s - loss: 8.9929e-04 - accuracy: 0.0000e+00 - val_loss: 0.0046 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 661ms/epoch - 8ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.00241
81/81 - 1s - loss: 8.9813e-04 - accuracy: 0.0000e+00 - val_loss: 0.0046 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 676ms/epoch - 8ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.00241
81/81 - 1s - loss: 8.9702e-04 - accuracy: 0.0000e+00 - val_loss: 0.0046 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 660ms/epoch - 8ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.00241
81/81 - 1s - loss: 8.9592e-04 - accuracy: 0.0000e+00 - val_loss: 0.0047 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 665ms/epoch - 8ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.00241
81/81 - 1s - loss: 8.9481e-04 - accuracy: 0.0000e+00 - val_loss: 0.0048 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 674ms/epoch - 8ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.00241
81/81 - 1s - loss: 8.9370e-04 - accuracy: 0.0000e+00 - val_loss: 0.0048 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 731ms/epoch - 9ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.00241
81/81 - 1s - loss: 8.9258e-04 - accuracy: 0.0000e+00 - val_loss: 0.0049 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 676ms/epoch - 8ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.00241
81/81 - 1s - loss: 8.9145e-04 - accuracy: 0.0000e+00 - val_loss: 0.0049 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 679ms/epoch - 8ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.00241
81/81 - 1s - loss: 8.9030e-04 - accuracy: 0.0000e+00 - val_loss: 0.0050 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 681ms/epoch - 8ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.00241
81/81 - 1s - loss: 8.8913e-04 - accuracy: 0.0000e+00 - val_loss: 0.0051 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 675ms/epoch - 8ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.00241
81/81 - 1s - loss: 8.8795e-04 - accuracy: 0.0000e+00 - val_loss: 0.0051 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 688ms/epoch - 8ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.00241
81/81 - 1s - loss: 8.8676e-04 - accuracy: 0.0000e+00 - val_loss: 0.0052 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 670ms/epoch - 8ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.00241
81/81 - 1s - loss: 8.8554e-04 - accuracy: 0.0000e+00 - val_loss: 0.0053 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 707ms/epoch - 9ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.00241
81/81 - 1s - loss: 8.8431e-04 - accuracy: 0.0000e+00 - val_loss: 0.0054 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 682ms/epoch - 8ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.00241
81/81 - 1s - loss: 8.8307e-04 - accuracy: 0.0000e+00 - val_loss: 0.0055 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 666ms/epoch - 8ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.00241
81/81 - 1s - loss: 8.8181e-04 - accuracy: 0.0000e+00 - val_loss: 0.0055 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 667ms/epoch - 8ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.00241
81/81 - 1s - loss: 8.8053e-04 - accuracy: 0.0000e+00 - val_loss: 0.0056 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 683ms/epoch - 8ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.00241
81/81 - 1s - loss: 8.7923e-04 - accuracy: 0.0000e+00 - val_loss: 0.0057 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 676ms/epoch - 8ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.00241
81/81 - 1s - loss: 8.7792e-04 - accuracy: 0.0000e+00 - val_loss: 0.0058 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 661ms/epoch - 8ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.00241
81/81 - 1s - loss: 8.7659e-04 - accuracy: 0.0000e+00 - val_loss: 0.0059 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 742ms/epoch - 9ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.00241
81/81 - 1s - loss: 8.7523e-04 - accuracy: 0.0000e+00 - val_loss: 0.0060 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 680ms/epoch - 8ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.00241
81/81 - 1s - loss: 8.7387e-04 - accuracy: 0.0000e+00 - val_loss: 0.0061 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 681ms/epoch - 8ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.00241
81/81 - 1s - loss: 8.7248e-04 - accuracy: 0.0000e+00 - val_loss: 0.0062 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 689ms/epoch - 9ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.00241
81/81 - 1s - loss: 8.7108e-04 - accuracy: 0.0000e+00 - val_loss: 0.0063 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 690ms/epoch - 9ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.00241
81/81 - 1s - loss: 8.6965e-04 - accuracy: 0.0000e+00 - val_loss: 0.0065 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 678ms/epoch - 8ms/step
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.00241
81/81 - 1s - loss: 8.6821e-04 - accuracy: 0.0000e+00 - val_loss: 0.0066 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 662ms/epoch - 8ms/step
Epoch 57/500

Epoch 00057: val_loss did not improve from 0.00241
81/81 - 1s - loss: 8.6675e-04 - accuracy: 0.0000e+00 - val_loss: 0.0067 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 662ms/epoch - 8ms/step
Epoch 58/500

Epoch 00058: val_loss did not improve from 0.00241
81/81 - 1s - loss: 8.6526e-04 - accuracy: 0.0000e+00 - val_loss: 0.0068 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 677ms/epoch - 8ms/step
Epoch 59/500

Epoch 00059: val_loss did not improve from 0.00241
81/81 - 1s - loss: 8.6376e-04 - accuracy: 0.0000e+00 - val_loss: 0.0069 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 683ms/epoch - 8ms/step
Epoch 60/500

Epoch 00060: val_loss did not improve from 0.00241
81/81 - 1s - loss: 8.6224e-04 - accuracy: 0.0000e+00 - val_loss: 0.0070 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 687ms/epoch - 8ms/step
Epoch 61/500

Epoch 00061: val_loss did not improve from 0.00241
81/81 - 1s - loss: 8.6070e-04 - accuracy: 0.0000e+00 - val_loss: 0.0072 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 682ms/epoch - 8ms/step
Epoch 62/500

Epoch 00062: val_loss did not improve from 0.00241
81/81 - 1s - loss: 8.5914e-04 - accuracy: 0.0000e+00 - val_loss: 0.0073 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 677ms/epoch - 8ms/step
Epoch 63/500

Epoch 00063: val_loss did not improve from 0.00241
81/81 - 1s - loss: 8.5756e-04 - accuracy: 0.0000e+00 - val_loss: 0.0074 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 687ms/epoch - 8ms/step
Epoch 64/500

Epoch 00064: val_loss did not improve from 0.00241
81/81 - 1s - loss: 8.5596e-04 - accuracy: 0.0000e+00 - val_loss: 0.0076 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 732ms/epoch - 9ms/step
Epoch 65/500

Epoch 00065: val_loss did not improve from 0.00241
81/81 - 1s - loss: 8.5434e-04 - accuracy: 0.0000e+00 - val_loss: 0.0077 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 699ms/epoch - 9ms/step
Epoch 66/500

Epoch 00066: val_loss did not improve from 0.00241
81/81 - 1s - loss: 8.5269e-04 - accuracy: 0.0000e+00 - val_loss: 0.0078 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 746ms/epoch - 9ms/step
Epoch 67/500

Epoch 00067: val_loss did not improve from 0.00241
81/81 - 1s - loss: 8.5103e-04 - accuracy: 0.0000e+00 - val_loss: 0.0079 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 685ms/epoch - 8ms/step
Epoch 68/500

Epoch 00068: val_loss did not improve from 0.00241
81/81 - 1s - loss: 8.4934e-04 - accuracy: 0.0000e+00 - val_loss: 0.0081 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 680ms/epoch - 8ms/step
Epoch 69/500

Epoch 00069: val_loss did not improve from 0.00241
81/81 - 1s - loss: 8.4764e-04 - accuracy: 0.0000e+00 - val_loss: 0.0082 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 735ms/epoch - 9ms/step
Epoch 70/500

Epoch 00070: val_loss did not improve from 0.00241
81/81 - 1s - loss: 8.4591e-04 - accuracy: 0.0000e+00 - val_loss: 0.0083 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 728ms/epoch - 9ms/step
Epoch 00070: early stopping
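The trace above is shaped by three Keras callbacks: `ModelCheckpoint` (saving to `LSTM2.h5`), `ReduceLROnPlateau` (cutting the learning rate 1e-3 → 1e-4 → 1e-5), and `EarlyStopping`. The reduce/stop mechanics can be sketched in pure Python; note the patience values below (5 epochs for the LR schedule, 50 for early stopping) are inferred from the trace, not taken from the notebook's source:

```python
def simulate_callbacks(val_losses, lr=1e-3, factor=0.1, min_lr=1e-5,
                       lr_patience=5, stop_patience=50):
    """Sketch of ReduceLROnPlateau + EarlyStopping logic: after
    `lr_patience` epochs without a new best val_loss the LR is cut by
    `factor` (floored at `min_lr`); after `stop_patience` stagnant
    epochs training halts. Returns (last_epoch, final_lr)."""
    best = float("inf")
    wait_lr = wait_stop = 0
    for epoch, vl in enumerate(val_losses, start=1):
        if vl < best:
            best, wait_lr, wait_stop = vl, 0, 0
        else:
            wait_lr += 1
            wait_stop += 1
            if wait_lr > lr_patience and lr > min_lr:
                lr = max(lr * factor, min_lr)
                wait_lr = 0
            if wait_stop >= stop_patience:
                return epoch, lr  # early stopping fires
    return len(val_losses), lr
```

With a flat validation loss this reproduces the pattern in the log: two LR cuts a handful of epochs apart, then a stop roughly 50 epochs after the last improvement.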
SMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 136.8787627980392 
RMSE:	 11.699519767838302 
MAPE:	 9.782684450137419

EMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	44.78% Accuracy
MSE:	 325.2876788257106 
RMSE:	 18.035733387520192 
MAPE:	 15.58326491461623

WMA
Prediction vs Close:		54.85% Accuracy
Prediction vs Prediction:	43.28% Accuracy
MSE:	 142.26654445774975 
RMSE:	 11.927554001460221 
MAPE:	 9.787176512242624

DEMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	45.52% Accuracy
MSE:	 171.26734938505615 
RMSE:	 13.086915197442679 
MAPE:	 11.821213958102536

KAMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 52.24625253706662 
RMSE:	 7.228156925321048 
MAPE:	 5.787764162926424
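Each summary block above reports one-step error metrics (MSE, RMSE, MAPE) plus two percentage "accuracies". The error metrics are standard; the exact formulas behind "Prediction vs Close" and "Prediction vs Prediction" are not shown in the output, so the directional reading sketched below is an assumption:

```python
import math

def regression_metrics(y_true, y_pred):
    """MSE, RMSE and MAPE (in percent), as in the summary blocks."""
    n = len(y_true)
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
    rmse = math.sqrt(mse)
    mape = 100.0 / n * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred))
    return mse, rmse, mape

def directional_accuracy(close, pred):
    """Percent of steps where the predicted move from the previous close
    has the same sign as the realised move (one plausible reading of the
    'Prediction vs Close' accuracy above)."""
    hits = sum(
        (pred[i] - close[i - 1]) * (close[i] - close[i - 1]) > 0
        for i in range(1, len(close))
    )
    return 100.0 * hits / (len(close) - 1)
```

Directional accuracies hovering near 50% while MAPE varies widely between smoothers is why both views are reported: a model can track levels closely yet call direction no better than chance.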
MIDPOINT
MIDPOINT([input_arrays], [timeperiod=14])

MidPoint over period (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 14
Outputs:
    real
14

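As the TA-Lib help text above says, MIDPOINT is just the mean of the window's extremes. A dependency-free reference implementation (the `None` padding stands in for TA-Lib's NaN warm-up region):

```python
def midpoint(series, timeperiod=14):
    """MidPoint over period: (max + min) / 2 of the trailing
    `timeperiod` values; the first timeperiod-1 outputs are undefined
    (None here, NaN in TA-Lib)."""
    out = [None] * (timeperiod - 1)
    for i in range(timeperiod - 1, len(series)):
        window = series[i - timeperiod + 1 : i + 1]
        out.append((max(window) + min(window)) / 2)
    return out
```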
Working on MIDPOINT predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.50 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4212.289, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3747.746, Time=0.07 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.33 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3523.401, Time=0.11 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3387.759, Time=0.12 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.61 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=1.18 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3389.758, Time=0.30 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 4.270 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1689.879
Date:                Sun, 12 Dec 2021   AIC                           3387.759
Time:                        12:30:07   BIC                           3406.522
Sample:                             0   HQIC                          3394.964
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1878      0.003   -345.315      0.000      -1.195      -1.181
ar.L2         -0.8876      0.007   -121.809      0.000      -0.902      -0.873
ar.L3         -0.3957      0.007    -60.127      0.000      -0.409      -0.383
sigma2         3.8904      0.020    193.404      0.000       3.851       3.930
===================================================================================
Ljung-Box (L1) (Q):                  13.21   Jarque-Bera (JB):           1659080.01
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.08   Skew:                             3.28
Prob(H) (two-sided):                  0.00   Kurtosis:                       225.31
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

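The stepwise search settles on ARIMA(3,3,0): triple-difference the series, then fit an AR(3) to the result (d=3 is unusually high and reflects how strongly trended the smoothed series is). The differencing half can be stated exactly:

```python
def difference(series, d=3):
    """Apply d rounds of first differencing, as in the ARIMA(3,3,0)
    models selected above; the AR(3) part is fit on this triple-
    differenced series and forecasts are integrated back afterwards."""
    out = list(series)
    for _ in range(d):
        out = [b - a for a, b in zip(out, out[1:])]
    return out
```

Each round shortens the series by one observation, so the 808-point training set fits on 805 triple-differenced values.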
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.07124, saving model to LSTM2.h5
81/81 - 6s - loss: 0.1063 - accuracy: 0.0000e+00 - val_loss: 0.0712 - val_accuracy: 0.0037 - lr: 0.0010 - 6s/epoch - 74ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.07124 to 0.04279, saving model to LSTM2.h5
81/81 - 1s - loss: 0.0388 - accuracy: 0.0000e+00 - val_loss: 0.0428 - val_accuracy: 0.0037 - lr: 0.0010 - 748ms/epoch - 9ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.04279 to 0.01224, saving model to LSTM2.h5
81/81 - 1s - loss: 0.0581 - accuracy: 0.0000e+00 - val_loss: 0.0122 - val_accuracy: 0.0037 - lr: 0.0010 - 747ms/epoch - 9ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.01224 to 0.00545, saving model to LSTM2.h5
81/81 - 1s - loss: 0.0288 - accuracy: 0.0000e+00 - val_loss: 0.0054 - val_accuracy: 0.0037 - lr: 0.0010 - 727ms/epoch - 9ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.00545 to 0.00465, saving model to LSTM2.h5
81/81 - 1s - loss: 0.0259 - accuracy: 0.0000e+00 - val_loss: 0.0047 - val_accuracy: 0.0037 - lr: 0.0010 - 738ms/epoch - 9ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.00465 to 0.00417, saving model to LSTM2.h5
81/81 - 1s - loss: 0.0109 - accuracy: 0.0000e+00 - val_loss: 0.0042 - val_accuracy: 0.0037 - lr: 0.0010 - 739ms/epoch - 9ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.00417
81/81 - 1s - loss: 0.0063 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 0.0010 - 696ms/epoch - 9ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.00417
81/81 - 1s - loss: 0.0050 - accuracy: 0.0000e+00 - val_loss: 0.0054 - val_accuracy: 0.0037 - lr: 0.0010 - 671ms/epoch - 8ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.00417
81/81 - 1s - loss: 0.0053 - accuracy: 0.0000e+00 - val_loss: 0.0064 - val_accuracy: 0.0037 - lr: 0.0010 - 698ms/epoch - 9ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.00417
81/81 - 1s - loss: 0.0066 - accuracy: 0.0000e+00 - val_loss: 0.0076 - val_accuracy: 0.0037 - lr: 0.0010 - 702ms/epoch - 9ms/step
Epoch 11/500

Epoch 00011: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00011: val_loss did not improve from 0.00417
81/81 - 1s - loss: 0.0089 - accuracy: 0.0000e+00 - val_loss: 0.0085 - val_accuracy: 0.0037 - lr: 0.0010 - 723ms/epoch - 9ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.00417
81/81 - 1s - loss: 0.0169 - accuracy: 0.0000e+00 - val_loss: 0.0060 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 694ms/epoch - 9ms/step
Epoch 13/500

Epoch 00013: val_loss improved from 0.00417 to 0.00384, saving model to LSTM2.h5
81/81 - 1s - loss: 0.0041 - accuracy: 0.0000e+00 - val_loss: 0.0038 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 755ms/epoch - 9ms/step
Epoch 14/500

Epoch 00014: val_loss improved from 0.00384 to 0.00340, saving model to LSTM2.h5
81/81 - 1s - loss: 0.0029 - accuracy: 0.0000e+00 - val_loss: 0.0034 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 728ms/epoch - 9ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.00340
81/81 - 1s - loss: 0.0021 - accuracy: 0.0000e+00 - val_loss: 0.0036 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 760ms/epoch - 9ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.00340
81/81 - 1s - loss: 0.0016 - accuracy: 0.0000e+00 - val_loss: 0.0043 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 695ms/epoch - 9ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.00340
81/81 - 1s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0052 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 702ms/epoch - 9ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.00340
81/81 - 1s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0061 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 691ms/epoch - 9ms/step
Epoch 19/500

Epoch 00019: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00019: val_loss did not improve from 0.00340
81/81 - 1s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0070 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 702ms/epoch - 9ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.00340
81/81 - 1s - loss: 9.6145e-04 - accuracy: 0.0000e+00 - val_loss: 0.0072 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 672ms/epoch - 8ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.00340
81/81 - 1s - loss: 9.5517e-04 - accuracy: 0.0000e+00 - val_loss: 0.0073 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 692ms/epoch - 9ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.00340
81/81 - 1s - loss: 9.5095e-04 - accuracy: 0.0000e+00 - val_loss: 0.0074 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 748ms/epoch - 9ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.00340
81/81 - 1s - loss: 9.4738e-04 - accuracy: 0.0000e+00 - val_loss: 0.0074 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 717ms/epoch - 9ms/step
Epoch 24/500

Epoch 00024: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00024: val_loss did not improve from 0.00340
81/81 - 1s - loss: 9.4399e-04 - accuracy: 0.0000e+00 - val_loss: 0.0075 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 698ms/epoch - 9ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.00340
81/81 - 1s - loss: 9.4064e-04 - accuracy: 0.0000e+00 - val_loss: 0.0076 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 720ms/epoch - 9ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.00340
81/81 - 1s - loss: 9.3730e-04 - accuracy: 0.0000e+00 - val_loss: 0.0077 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 749ms/epoch - 9ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.00340
81/81 - 1s - loss: 9.3396e-04 - accuracy: 0.0000e+00 - val_loss: 0.0078 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 674ms/epoch - 8ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.00340
81/81 - 1s - loss: 9.3063e-04 - accuracy: 0.0000e+00 - val_loss: 0.0079 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 708ms/epoch - 9ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.00340
81/81 - 1s - loss: 9.2732e-04 - accuracy: 0.0000e+00 - val_loss: 0.0080 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 728ms/epoch - 9ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.00340
81/81 - 1s - loss: 9.2402e-04 - accuracy: 0.0000e+00 - val_loss: 0.0080 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 702ms/epoch - 9ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.00340
81/81 - 1s - loss: 9.2074e-04 - accuracy: 0.0000e+00 - val_loss: 0.0081 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 755ms/epoch - 9ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.00340
81/81 - 1s - loss: 9.1749e-04 - accuracy: 0.0000e+00 - val_loss: 0.0082 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 705ms/epoch - 9ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.00340
81/81 - 1s - loss: 9.1427e-04 - accuracy: 0.0000e+00 - val_loss: 0.0083 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 716ms/epoch - 9ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.00340
81/81 - 1s - loss: 9.1108e-04 - accuracy: 0.0000e+00 - val_loss: 0.0084 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 704ms/epoch - 9ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.00340
81/81 - 1s - loss: 9.0793e-04 - accuracy: 0.0000e+00 - val_loss: 0.0086 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 726ms/epoch - 9ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.00340
81/81 - 1s - loss: 9.0480e-04 - accuracy: 0.0000e+00 - val_loss: 0.0087 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 722ms/epoch - 9ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.00340
81/81 - 1s - loss: 9.0171e-04 - accuracy: 0.0000e+00 - val_loss: 0.0088 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 713ms/epoch - 9ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.00340
81/81 - 1s - loss: 8.9866e-04 - accuracy: 0.0000e+00 - val_loss: 0.0089 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 682ms/epoch - 8ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.00340
81/81 - 1s - loss: 8.9564e-04 - accuracy: 0.0000e+00 - val_loss: 0.0090 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 727ms/epoch - 9ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.00340
81/81 - 1s - loss: 8.9266e-04 - accuracy: 0.0000e+00 - val_loss: 0.0091 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 700ms/epoch - 9ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.00340
81/81 - 1s - loss: 8.8971e-04 - accuracy: 0.0000e+00 - val_loss: 0.0092 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 718ms/epoch - 9ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.00340
81/81 - 1s - loss: 8.8680e-04 - accuracy: 0.0000e+00 - val_loss: 0.0093 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 732ms/epoch - 9ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.00340
81/81 - 1s - loss: 8.8393e-04 - accuracy: 0.0000e+00 - val_loss: 0.0094 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 697ms/epoch - 9ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.00340
81/81 - 1s - loss: 8.8108e-04 - accuracy: 0.0000e+00 - val_loss: 0.0096 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 730ms/epoch - 9ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.00340
81/81 - 1s - loss: 8.7827e-04 - accuracy: 0.0000e+00 - val_loss: 0.0097 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 757ms/epoch - 9ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.00340
81/81 - 1s - loss: 8.7549e-04 - accuracy: 0.0000e+00 - val_loss: 0.0098 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 713ms/epoch - 9ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.00340
81/81 - 1s - loss: 8.7274e-04 - accuracy: 0.0000e+00 - val_loss: 0.0099 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 705ms/epoch - 9ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.00340
81/81 - 1s - loss: 8.7003e-04 - accuracy: 0.0000e+00 - val_loss: 0.0100 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 705ms/epoch - 9ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.00340
81/81 - 1s - loss: 8.6734e-04 - accuracy: 0.0000e+00 - val_loss: 0.0101 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 694ms/epoch - 9ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.00340
81/81 - 1s - loss: 8.6467e-04 - accuracy: 0.0000e+00 - val_loss: 0.0102 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 712ms/epoch - 9ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.00340
81/81 - 1s - loss: 8.6204e-04 - accuracy: 0.0000e+00 - val_loss: 0.0103 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 714ms/epoch - 9ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.00340
81/81 - 1s - loss: 8.5942e-04 - accuracy: 0.0000e+00 - val_loss: 0.0104 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 702ms/epoch - 9ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.00340
81/81 - 1s - loss: 8.5684e-04 - accuracy: 0.0000e+00 - val_loss: 0.0105 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 720ms/epoch - 9ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.00340
81/81 - 1s - loss: 8.5427e-04 - accuracy: 0.0000e+00 - val_loss: 0.0106 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 707ms/epoch - 9ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.00340
81/81 - 1s - loss: 8.5172e-04 - accuracy: 0.0000e+00 - val_loss: 0.0107 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 692ms/epoch - 9ms/step
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.00340
81/81 - 1s - loss: 8.4920e-04 - accuracy: 0.0000e+00 - val_loss: 0.0108 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 714ms/epoch - 9ms/step
Epoch 57/500

Epoch 00057: val_loss did not improve from 0.00340
81/81 - 1s - loss: 8.4669e-04 - accuracy: 0.0000e+00 - val_loss: 0.0109 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 724ms/epoch - 9ms/step
Epoch 58/500

Epoch 00058: val_loss did not improve from 0.00340
81/81 - 1s - loss: 8.4421e-04 - accuracy: 0.0000e+00 - val_loss: 0.0109 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 719ms/epoch - 9ms/step
Epoch 59/500

Epoch 00059: val_loss did not improve from 0.00340
81/81 - 1s - loss: 8.4174e-04 - accuracy: 0.0000e+00 - val_loss: 0.0110 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 713ms/epoch - 9ms/step
Epoch 60/500

Epoch 00060: val_loss did not improve from 0.00340
81/81 - 1s - loss: 8.3928e-04 - accuracy: 0.0000e+00 - val_loss: 0.0111 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 706ms/epoch - 9ms/step
Epoch 61/500

Epoch 00061: val_loss did not improve from 0.00340
81/81 - 1s - loss: 8.3684e-04 - accuracy: 0.0000e+00 - val_loss: 0.0111 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 688ms/epoch - 8ms/step
Epoch 62/500

Epoch 00062: val_loss did not improve from 0.00340
81/81 - 1s - loss: 8.3442e-04 - accuracy: 0.0000e+00 - val_loss: 0.0112 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 688ms/epoch - 8ms/step
Epoch 63/500

Epoch 00063: val_loss did not improve from 0.00340
81/81 - 1s - loss: 8.3201e-04 - accuracy: 0.0000e+00 - val_loss: 0.0113 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 700ms/epoch - 9ms/step
Epoch 64/500

Epoch 00064: val_loss did not improve from 0.00340
81/81 - 1s - loss: 8.2960e-04 - accuracy: 0.0000e+00 - val_loss: 0.0113 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 700ms/epoch - 9ms/step
Epoch 00064: early stopping
SMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 136.8787627980392 
RMSE:	 11.699519767838302 
MAPE:	 9.782684450137419

EMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	44.78% Accuracy
MSE:	 325.2876788257106 
RMSE:	 18.035733387520192 
MAPE:	 15.58326491461623

WMA
Prediction vs Close:		54.85% Accuracy
Prediction vs Prediction:	43.28% Accuracy
MSE:	 142.26654445774975 
RMSE:	 11.927554001460221 
MAPE:	 9.787176512242624

DEMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	45.52% Accuracy
MSE:	 171.26734938505615 
RMSE:	 13.086915197442679 
MAPE:	 11.821213958102536

KAMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 52.24625253706662 
RMSE:	 7.228156925321048 
MAPE:	 5.787764162926424

MIDPOINT
Prediction vs Close:		51.87% Accuracy
Prediction vs Prediction:	45.52% Accuracy
MSE:	 51.03620830548878 
RMSE:	 7.143963067197981 
MAPE:	 5.7689786092745114
T3
T3([input_arrays], [timeperiod=5], [vfactor=0.7])

Triple Exponential Moving Average (T3) (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 5
    vfactor: 0.7
Outputs:
    real
19

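The T3 indicator documented above is Tillson's "generalized DEMA" applied three times: GD(x) = (1+v)·EMA(x) − v·EMA(EMA(x)), with v the `vfactor`. A minimal sketch; seeding each EMA with the first value is a simplification, where TA-Lib instead uses an SMA warm-up and emits NaNs over the unstable region:

```python
def ema(series, timeperiod):
    """Simple recursive EMA with alpha = 2 / (timeperiod + 1),
    seeded with the first observation."""
    alpha = 2.0 / (timeperiod + 1)
    out = [series[0]]
    for x in series[1:]:
        out.append(alpha * x + (1 - alpha) * out[-1])
    return out

def t3(series, timeperiod=5, vfactor=0.7):
    """Tillson T3: GD(x) = (1+v)*EMA(x) - v*EMA(EMA(x)),
    composed three times."""
    def gd(s):
        e1 = ema(s, timeperiod)
        e2 = ema(e1, timeperiod)
        return [(1 + vfactor) * a - vfactor * b for a, b in zip(e1, e2)]
    return gd(gd(gd(series)))
```

At vfactor=0 T3 degenerates to a triple EMA; at vfactor=1 each GD stage is a full DEMA, giving a smoother line with less lag.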
Working on T3 predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.54 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4414.515, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3944.062, Time=0.06 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.51 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3715.173, Time=0.09 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3577.471, Time=0.11 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.86 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.81 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3579.471, Time=0.25 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 4.258 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1784.736
Date:                Sun, 12 Dec 2021   AIC                           3577.471
Time:                        12:32:30   BIC                           3596.235
Sample:                             0   HQIC                          3584.677
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1982      0.003   -389.844      0.000      -1.204      -1.192
ar.L2         -0.8974      0.006   -139.861      0.000      -0.910      -0.885
ar.L3         -0.3983      0.006    -68.862      0.000      -0.410      -0.387
sigma2         4.9242      0.023    215.469      0.000       4.879       4.969
===================================================================================
Ljung-Box (L1) (Q):                  14.55   Jarque-Bera (JB):           2468024.38
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       274.15
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

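Both SARIMAX diagnostic tables above reject residual normality with enormous Jarque-Bera statistics and kurtosis far beyond the mesokurtic value of 3 (225.31 and 274.15) — exactly the heavy-tailed volatility the introduction argues needs rebalancing between the ARIMA and LSTM stages. The statistic itself is straightforward to compute:

```python
def jarque_bera(x):
    """Jarque-Bera statistic: n/6 * (S^2 + (K - 3)^2 / 4), where S and K
    are the sample skewness and (non-excess) kurtosis. Large values,
    as in the tables above, reject normality of the residuals."""
    n = len(x)
    mean = sum(x) / n
    m2 = sum((v - mean) ** 2 for v in x) / n
    m3 = sum((v - mean) ** 3 for v in x) / n
    m4 = sum((v - mean) ** 4 for v in x) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2
    return n / 6.0 * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)
```

Under normality JB is approximately chi-squared with 2 degrees of freedom, so values in the millions (as reported above) correspond to a p-value of effectively zero.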
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.06158, saving model to LSTM2.h5
81/81 - 6s - loss: 0.1058 - accuracy: 0.0000e+00 - val_loss: 0.0616 - val_accuracy: 0.0037 - lr: 0.0010 - 6s/epoch - 71ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.06158 to 0.04372, saving model to LSTM2.h5
81/81 - 1s - loss: 0.0206 - accuracy: 0.0000e+00 - val_loss: 0.0437 - val_accuracy: 0.0037 - lr: 0.0010 - 795ms/epoch - 10ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.04372 to 0.01454, saving model to LSTM2.h5
81/81 - 1s - loss: 0.0351 - accuracy: 0.0000e+00 - val_loss: 0.0145 - val_accuracy: 0.0037 - lr: 0.0010 - 728ms/epoch - 9ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.01454 to 0.00900, saving model to LSTM2.h5
81/81 - 1s - loss: 0.0366 - accuracy: 0.0000e+00 - val_loss: 0.0090 - val_accuracy: 0.0037 - lr: 0.0010 - 785ms/epoch - 10ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.00900 to 0.00659, saving model to LSTM2.h5
81/81 - 1s - loss: 0.0315 - accuracy: 0.0000e+00 - val_loss: 0.0066 - val_accuracy: 0.0037 - lr: 0.0010 - 741ms/epoch - 9ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.00659 to 0.00451, saving model to LSTM2.h5
81/81 - 1s - loss: 0.0242 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 0.0010 - 715ms/epoch - 9ms/step
Epoch 7/500

Epoch 00007: val_loss improved from 0.00451 to 0.00359, saving model to LSTM2.h5
81/81 - 1s - loss: 0.0152 - accuracy: 0.0000e+00 - val_loss: 0.0036 - val_accuracy: 0.0037 - lr: 0.0010 - 735ms/epoch - 9ms/step
Epoch 8/500

Epoch 00008: val_loss improved from 0.00359 to 0.00329, saving model to LSTM2.h5
81/81 - 1s - loss: 0.0104 - accuracy: 0.0000e+00 - val_loss: 0.0033 - val_accuracy: 0.0037 - lr: 0.0010 - 781ms/epoch - 10ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.00329
81/81 - 1s - loss: 0.0085 - accuracy: 0.0000e+00 - val_loss: 0.0035 - val_accuracy: 0.0037 - lr: 0.0010 - 684ms/epoch - 8ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.00329
81/81 - 1s - loss: 0.0086 - accuracy: 0.0000e+00 - val_loss: 0.0038 - val_accuracy: 0.0037 - lr: 0.0010 - 705ms/epoch - 9ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.00329
81/81 - 1s - loss: 0.0097 - accuracy: 0.0000e+00 - val_loss: 0.0054 - val_accuracy: 0.0037 - lr: 0.0010 - 702ms/epoch - 9ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.00329
81/81 - 1s - loss: 0.0117 - accuracy: 0.0000e+00 - val_loss: 0.0053 - val_accuracy: 0.0037 - lr: 0.0010 - 708ms/epoch - 9ms/step
Epoch 13/500

Epoch 00013: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00013: val_loss did not improve from 0.00329
81/81 - 1s - loss: 0.0130 - accuracy: 0.0000e+00 - val_loss: 0.0076 - val_accuracy: 0.0037 - lr: 0.0010 - 697ms/epoch - 9ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.00329
81/81 - 1s - loss: 0.0180 - accuracy: 0.0000e+00 - val_loss: 0.0052 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 741ms/epoch - 9ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.00329
81/81 - 1s - loss: 0.0043 - accuracy: 0.0000e+00 - val_loss: 0.0036 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 702ms/epoch - 9ms/step
Epoch 16/500

Epoch 00016: val_loss improved from 0.00329 to 0.00301, saving model to LSTM2.h5
81/81 - 1s - loss: 0.0030 - accuracy: 0.0000e+00 - val_loss: 0.0030 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 757ms/epoch - 9ms/step
Epoch 17/500

Epoch 00017: val_loss improved from 0.00301 to 0.00264, saving model to LSTM2.h5
81/81 - 1s - loss: 0.0021 - accuracy: 0.0000e+00 - val_loss: 0.0026 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 707ms/epoch - 9ms/step
Epoch 18/500

Epoch 00018: val_loss improved from 0.00264 to 0.00248, saving model to LSTM2.h5
81/81 - 1s - loss: 0.0016 - accuracy: 0.0000e+00 - val_loss: 0.0025 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 789ms/epoch - 10ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.00248
81/81 - 1s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0025 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 713ms/epoch - 9ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.00248
81/81 - 1s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0027 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 682ms/epoch - 8ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.00248
81/81 - 1s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0031 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 744ms/epoch - 9ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.00248
81/81 - 1s - loss: 9.3281e-04 - accuracy: 0.0000e+00 - val_loss: 0.0036 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 711ms/epoch - 9ms/step
Epoch 23/500

Epoch 00023: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00023: val_loss did not improve from 0.00248
81/81 - 1s - loss: 8.9032e-04 - accuracy: 0.0000e+00 - val_loss: 0.0041 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 693ms/epoch - 9ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.00248
81/81 - 1s - loss: 8.4976e-04 - accuracy: 0.0000e+00 - val_loss: 0.0041 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 748ms/epoch - 9ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.00248
81/81 - 1s - loss: 8.4727e-04 - accuracy: 0.0000e+00 - val_loss: 0.0042 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 683ms/epoch - 8ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.00248
81/81 - 1s - loss: 8.4534e-04 - accuracy: 0.0000e+00 - val_loss: 0.0042 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 686ms/epoch - 8ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.00248
81/81 - 1s - loss: 8.4359e-04 - accuracy: 0.0000e+00 - val_loss: 0.0042 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 693ms/epoch - 9ms/step
Epoch 28/500

Epoch 00028: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00028: val_loss did not improve from 0.00248
81/81 - 1s - loss: 8.4192e-04 - accuracy: 0.0000e+00 - val_loss: 0.0043 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 690ms/epoch - 9ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.00248
81/81 - 1s - loss: 8.4028e-04 - accuracy: 0.0000e+00 - val_loss: 0.0043 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 725ms/epoch - 9ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.00248
81/81 - 1s - loss: 8.3865e-04 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 704ms/epoch - 9ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.00248
81/81 - 1s - loss: 8.3703e-04 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 713ms/epoch - 9ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.00248
81/81 - 1s - loss: 8.3541e-04 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 686ms/epoch - 8ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.00248
81/81 - 1s - loss: 8.3379e-04 - accuracy: 0.0000e+00 - val_loss: 0.0046 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 679ms/epoch - 8ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.00248
81/81 - 1s - loss: 8.3216e-04 - accuracy: 0.0000e+00 - val_loss: 0.0046 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 692ms/epoch - 9ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.00248
81/81 - 1s - loss: 8.3053e-04 - accuracy: 0.0000e+00 - val_loss: 0.0047 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 753ms/epoch - 9ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.00248
81/81 - 1s - loss: 8.2890e-04 - accuracy: 0.0000e+00 - val_loss: 0.0048 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 711ms/epoch - 9ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.00248
81/81 - 1s - loss: 8.2727e-04 - accuracy: 0.0000e+00 - val_loss: 0.0049 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 714ms/epoch - 9ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.00248
81/81 - 1s - loss: 8.2564e-04 - accuracy: 0.0000e+00 - val_loss: 0.0049 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 770ms/epoch - 10ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.00248
81/81 - 1s - loss: 8.2401e-04 - accuracy: 0.0000e+00 - val_loss: 0.0050 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 695ms/epoch - 9ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.00248
81/81 - 1s - loss: 8.2238e-04 - accuracy: 0.0000e+00 - val_loss: 0.0051 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 691ms/epoch - 9ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.00248
81/81 - 1s - loss: 8.2075e-04 - accuracy: 0.0000e+00 - val_loss: 0.0052 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 696ms/epoch - 9ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.00248
81/81 - 1s - loss: 8.1911e-04 - accuracy: 0.0000e+00 - val_loss: 0.0053 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 710ms/epoch - 9ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.00248
81/81 - 1s - loss: 8.1749e-04 - accuracy: 0.0000e+00 - val_loss: 0.0054 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 712ms/epoch - 9ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.00248
81/81 - 1s - loss: 8.1586e-04 - accuracy: 0.0000e+00 - val_loss: 0.0055 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 702ms/epoch - 9ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.00248
81/81 - 1s - loss: 8.1423e-04 - accuracy: 0.0000e+00 - val_loss: 0.0055 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 704ms/epoch - 9ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.00248
81/81 - 1s - loss: 8.1260e-04 - accuracy: 0.0000e+00 - val_loss: 0.0056 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 698ms/epoch - 9ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.00248
81/81 - 1s - loss: 8.1097e-04 - accuracy: 0.0000e+00 - val_loss: 0.0057 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 701ms/epoch - 9ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.00248
81/81 - 1s - loss: 8.0933e-04 - accuracy: 0.0000e+00 - val_loss: 0.0058 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 744ms/epoch - 9ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.00248
81/81 - 1s - loss: 8.0770e-04 - accuracy: 0.0000e+00 - val_loss: 0.0059 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 695ms/epoch - 9ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.00248
81/81 - 1s - loss: 8.0606e-04 - accuracy: 0.0000e+00 - val_loss: 0.0060 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 724ms/epoch - 9ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.00248
81/81 - 1s - loss: 8.0442e-04 - accuracy: 0.0000e+00 - val_loss: 0.0062 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 681ms/epoch - 8ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.00248
81/81 - 1s - loss: 8.0277e-04 - accuracy: 0.0000e+00 - val_loss: 0.0063 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 682ms/epoch - 8ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.00248
81/81 - 1s - loss: 8.0112e-04 - accuracy: 0.0000e+00 - val_loss: 0.0064 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 703ms/epoch - 9ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.00248
81/81 - 1s - loss: 7.9946e-04 - accuracy: 0.0000e+00 - val_loss: 0.0065 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 709ms/epoch - 9ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.00248
81/81 - 1s - loss: 7.9780e-04 - accuracy: 0.0000e+00 - val_loss: 0.0066 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 702ms/epoch - 9ms/step
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.00248
81/81 - 1s - loss: 7.9612e-04 - accuracy: 0.0000e+00 - val_loss: 0.0067 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 713ms/epoch - 9ms/step
Epoch 57/500

Epoch 00057: val_loss did not improve from 0.00248
81/81 - 1s - loss: 7.9444e-04 - accuracy: 0.0000e+00 - val_loss: 0.0068 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 687ms/epoch - 8ms/step
Epoch 58/500

Epoch 00058: val_loss did not improve from 0.00248
81/81 - 1s - loss: 7.9274e-04 - accuracy: 0.0000e+00 - val_loss: 0.0069 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 714ms/epoch - 9ms/step
Epoch 59/500

Epoch 00059: val_loss did not improve from 0.00248
81/81 - 1s - loss: 7.9104e-04 - accuracy: 0.0000e+00 - val_loss: 0.0070 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 717ms/epoch - 9ms/step
Epoch 60/500

Epoch 00060: val_loss did not improve from 0.00248
81/81 - 1s - loss: 7.8932e-04 - accuracy: 0.0000e+00 - val_loss: 0.0071 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 708ms/epoch - 9ms/step
Epoch 61/500

Epoch 00061: val_loss did not improve from 0.00248
81/81 - 1s - loss: 7.8759e-04 - accuracy: 0.0000e+00 - val_loss: 0.0072 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 703ms/epoch - 9ms/step
Epoch 62/500

Epoch 00062: val_loss did not improve from 0.00248
81/81 - 1s - loss: 7.8584e-04 - accuracy: 0.0000e+00 - val_loss: 0.0073 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 713ms/epoch - 9ms/step
Epoch 63/500

Epoch 00063: val_loss did not improve from 0.00248
81/81 - 1s - loss: 7.8408e-04 - accuracy: 0.0000e+00 - val_loss: 0.0074 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 710ms/epoch - 9ms/step
Epoch 64/500

Epoch 00064: val_loss did not improve from 0.00248
81/81 - 1s - loss: 7.8231e-04 - accuracy: 0.0000e+00 - val_loss: 0.0075 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 686ms/epoch - 8ms/step
Epoch 65/500

Epoch 00065: val_loss did not improve from 0.00248
81/81 - 1s - loss: 7.8051e-04 - accuracy: 0.0000e+00 - val_loss: 0.0076 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 699ms/epoch - 9ms/step
Epoch 66/500

Epoch 00066: val_loss did not improve from 0.00248
81/81 - 1s - loss: 7.7870e-04 - accuracy: 0.0000e+00 - val_loss: 0.0077 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 694ms/epoch - 9ms/step
Epoch 67/500

Epoch 00067: val_loss did not improve from 0.00248
81/81 - 1s - loss: 7.7688e-04 - accuracy: 0.0000e+00 - val_loss: 0.0078 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 695ms/epoch - 9ms/step
Epoch 68/500

Epoch 00068: val_loss did not improve from 0.00248
81/81 - 1s - loss: 7.7503e-04 - accuracy: 0.0000e+00 - val_loss: 0.0079 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 706ms/epoch - 9ms/step
Epoch 00068: early stopping
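The log above shows `ReduceLROnPlateau` cutting the learning rate by a factor of 10 (floored at 1e-05) after a stretch of epochs without `val_loss` improvement, with `EarlyStopping` ending the run at epoch 68. A minimal pure-Python sketch of that plateau logic follows; the factor 0.1, patience 5, and `min_lr` 1e-5 are assumptions read off the log above, not confirmed hyperparameters of the notebook's callbacks:

```python
def simulate_plateau_schedule(val_losses, lr=1e-3, factor=0.1,
                              patience=5, min_lr=1e-5):
    """Replay ReduceLROnPlateau-style logic over a val_loss history.

    Returns the learning rate in effect at each epoch.
    """
    best = float("inf")
    wait = 0
    lrs = []
    for loss in val_losses:
        lrs.append(lr)
        if loss < best:           # an improvement resets the patience counter
            best = loss
            wait = 0
        else:
            wait += 1
            if wait >= patience:  # plateau: cut the lr, respecting the floor
                lr = max(lr * factor, min_lr)
                wait = 0
    return lrs

# A history that improves for three epochs, then plateaus for twelve:
history = [0.09, 0.02, 0.004] + [0.005] * 12
print(simulate_plateau_schedule(history))
```

With these assumed settings, the schedule stays at 1e-3 through the plateau's patience window, drops to 1e-4, then to the 1e-5 floor, mirroring the `lr:` column in the log.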
SMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 136.8787627980392 
RMSE:	 11.699519767838302 
MAPE:	 9.782684450137419

EMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	44.78% Accuracy
MSE:	 325.2876788257106 
RMSE:	 18.035733387520192 
MAPE:	 15.58326491461623

WMA
Prediction vs Close:		54.85% Accuracy
Prediction vs Prediction:	43.28% Accuracy
MSE:	 142.26654445774975 
RMSE:	 11.927554001460221 
MAPE:	 9.787176512242624

DEMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	45.52% Accuracy
MSE:	 171.26734938505615 
RMSE:	 13.086915197442679 
MAPE:	 11.821213958102536

KAMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 52.24625253706662 
RMSE:	 7.228156925321048 
MAPE:	 5.787764162926424

MIDPOINT
Prediction vs Close:		51.87% Accuracy
Prediction vs Prediction:	45.52% Accuracy
MSE:	 51.03620830548878 
RMSE:	 7.143963067197981 
MAPE:	 5.7689786092745114

T3
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	48.13% Accuracy
MSE:	 131.82692716984482 
RMSE:	 11.481590794391028 
MAPE:	 9.148826908925223
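Each indicator summary above pairs two directional hit rates with MSE, RMSE, and MAPE. A small sketch of how such metrics are commonly computed is below (pure Python; interpreting "Prediction vs Close" accuracy as the fraction of steps where the predicted move and the actual close-to-close move share a sign is an assumption about the notebook's definition, not something stated in the output):

```python
import math

def regression_metrics(actual, predicted):
    """MSE, RMSE, and MAPE (in percent) between two equal-length series."""
    n = len(actual)
    mse = sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n
    rmse = math.sqrt(mse)
    mape = 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / n
    return mse, rmse, mape

def directional_accuracy(actual, predicted):
    """Percent of steps where the predicted move matches the actual move's sign."""
    hits = sum(
        ((predicted[i] - actual[i - 1]) >= 0) == ((actual[i] - actual[i - 1]) >= 0)
        for i in range(1, len(actual))
    )
    return 100.0 * hits / (len(actual) - 1)
```

Note that a low MAPE and a high directional accuracy need not coincide — KAMA and MIDPOINT above have the lowest errors but unremarkable hit rates — which is why the notebook reports both.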
TEMA
TEMA([input_arrays], [timeperiod=30])

Triple Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
9
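The help text above is TA-Lib's docstring for TEMA. For reference, the triple exponential moving average is conventionally defined as TEMA = 3·EMA(price) − 3·EMA(EMA(price)) + EMA(EMA(EMA(price))); a minimal pure-Python sketch of that definition follows (it is the textbook formula, not necessarily bit-identical to TA-Lib's seeding and lookback conventions):

```python
def ema(series, period):
    """Exponential moving average with smoothing 2/(period + 1),
    seeded from the first value (TA-Lib seeds from an SMA instead)."""
    alpha = 2.0 / (period + 1)
    out = [series[0]]
    for x in series[1:]:
        out.append(alpha * x + (1 - alpha) * out[-1])
    return out

def tema(series, period=30):
    """TEMA = 3*EMA - 3*EMA(EMA) + EMA(EMA(EMA))."""
    e1 = ema(series, period)
    e2 = ema(e1, period)
    e3 = ema(e2, period)
    return [3 * a - 3 * b + c for a, b, c in zip(e1, e2, e3)]
```

A quick sanity check: on a constant series every EMA equals the constant, so TEMA returns 3c − 3c + c = c.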

Working on TEMA predictions
parameters used: 808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.66 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4352.703, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3889.412, Time=0.06 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.38 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3689.930, Time=0.08 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3574.245, Time=0.11 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.50 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=1.04 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3576.245, Time=0.28 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 4.165 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1783.123
Date:                Sun, 12 Dec 2021   AIC                           3574.245
Time:                        12:34:50   BIC                           3593.008
Sample:                             0   HQIC                          3581.451
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1480      0.004   -302.430      0.000      -1.155      -1.141
ar.L2         -0.8300      0.008    -99.682      0.000      -0.846      -0.814
ar.L3         -0.3687      0.007    -50.527      0.000      -0.383      -0.354
sigma2         4.9055      0.028    175.970      0.000       4.851       4.960
===================================================================================
Ljung-Box (L1) (Q):                  11.61   Jarque-Bera (JB):           1261976.58
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.16   Skew:                             2.52
Prob(H) (two-sided):                  0.00   Kurtosis:                       196.90
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 
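The stepwise search above selects the order that minimizes AIC, and the criteria in the SARIMAX table can be reproduced from the reported log-likelihood. With k = 4 estimated parameters (three AR coefficients plus sigma2) and an effective sample of about 805 observations (808 minus the d = 3 differences — an inference from the printed criteria, not stated in the output), AIC = 2k − 2·lnL and BIC = k·ln(n) − 2·lnL:

```python
import math

log_lik = -1783.123   # Log Likelihood from the SARIMAX table above
k = 4                 # estimated parameters: ar.L1, ar.L2, ar.L3, sigma2
n_eff = 805           # assumed effective observations after d=3 differencing

aic = 2 * k - 2 * log_lik                          # Akaike criterion
bic = k * math.log(n_eff) - 2 * log_lik            # Bayesian criterion
hqic = 2 * k * math.log(math.log(n_eff)) - 2 * log_lik  # Hannan-Quinn

print(round(aic, 3), round(bic, 3), round(hqic, 3))
```

These reproduce the table's AIC 3574.245, BIC 3593.008, and HQIC 3581.451 to within the rounding of the reported log-likelihood, which also confirms why the intercept model at AIC 3576.245 (one extra parameter, no likelihood gain) loses the stepwise comparison.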

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.09338, saving model to LSTM2.h5
81/81 - 6s - loss: 0.0863 - accuracy: 0.0000e+00 - val_loss: 0.0934 - val_accuracy: 0.0037 - lr: 0.0010 - 6s/epoch - 77ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.09338 to 0.01889, saving model to LSTM2.h5
81/81 - 1s - loss: 0.0719 - accuracy: 0.0000e+00 - val_loss: 0.0189 - val_accuracy: 0.0037 - lr: 0.0010 - 715ms/epoch - 9ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.01889 to 0.00405, saving model to LSTM2.h5
81/81 - 1s - loss: 0.0139 - accuracy: 0.0000e+00 - val_loss: 0.0040 - val_accuracy: 0.0037 - lr: 0.0010 - 728ms/epoch - 9ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.00405
81/81 - 1s - loss: 0.0188 - accuracy: 0.0000e+00 - val_loss: 0.0053 - val_accuracy: 0.0037 - lr: 0.0010 - 745ms/epoch - 9ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.00405
81/81 - 1s - loss: 0.0208 - accuracy: 0.0000e+00 - val_loss: 0.0041 - val_accuracy: 0.0037 - lr: 0.0010 - 694ms/epoch - 9ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.00405
81/81 - 1s - loss: 0.0166 - accuracy: 0.0000e+00 - val_loss: 0.0049 - val_accuracy: 0.0037 - lr: 0.0010 - 676ms/epoch - 8ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.00405
81/81 - 1s - loss: 0.0150 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 0.0010 - 728ms/epoch - 9ms/step
Epoch 8/500

Epoch 00008: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00008: val_loss improved from 0.00405 to 0.00395, saving model to LSTM2.h5
81/81 - 1s - loss: 0.0148 - accuracy: 0.0000e+00 - val_loss: 0.0040 - val_accuracy: 0.0037 - lr: 0.0010 - 747ms/epoch - 9ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.00395
81/81 - 1s - loss: 0.0191 - accuracy: 0.0000e+00 - val_loss: 0.0226 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 674ms/epoch - 8ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.00395
81/81 - 1s - loss: 0.0039 - accuracy: 0.0000e+00 - val_loss: 0.0188 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 709ms/epoch - 9ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.00395
81/81 - 1s - loss: 0.0028 - accuracy: 0.0000e+00 - val_loss: 0.0158 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 694ms/epoch - 9ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.00395
81/81 - 1s - loss: 0.0020 - accuracy: 0.0000e+00 - val_loss: 0.0131 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 680ms/epoch - 8ms/step
Epoch 13/500

Epoch 00013: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00013: val_loss did not improve from 0.00395
81/81 - 1s - loss: 0.0015 - accuracy: 0.0000e+00 - val_loss: 0.0107 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 693ms/epoch - 9ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.00395
81/81 - 1s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0107 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 681ms/epoch - 8ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.00395
81/81 - 1s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0107 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 709ms/epoch - 9ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.00395
81/81 - 1s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0106 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 676ms/epoch - 8ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.00395
81/81 - 1s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0105 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 732ms/epoch - 9ms/step
Epoch 18/500

Epoch 00018: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00018: val_loss did not improve from 0.00395
81/81 - 1s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0103 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 716ms/epoch - 9ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.00395
81/81 - 1s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0101 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 694ms/epoch - 9ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.00395
81/81 - 1s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0099 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 712ms/epoch - 9ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.00395
81/81 - 1s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0096 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 699ms/epoch - 9ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.00395
81/81 - 1s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0094 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 731ms/epoch - 9ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.00395
81/81 - 1s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0091 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 711ms/epoch - 9ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.00395
81/81 - 1s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0089 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 699ms/epoch - 9ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.00395
81/81 - 1s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0086 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 698ms/epoch - 9ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.00395
81/81 - 1s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0084 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 693ms/epoch - 9ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.00395
81/81 - 1s - loss: 9.9689e-04 - accuracy: 0.0000e+00 - val_loss: 0.0081 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 706ms/epoch - 9ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.00395
81/81 - 1s - loss: 9.8528e-04 - accuracy: 0.0000e+00 - val_loss: 0.0079 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 748ms/epoch - 9ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.00395
81/81 - 1s - loss: 9.7432e-04 - accuracy: 0.0000e+00 - val_loss: 0.0076 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 692ms/epoch - 9ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.00395
81/81 - 1s - loss: 9.6402e-04 - accuracy: 0.0000e+00 - val_loss: 0.0074 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 702ms/epoch - 9ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.00395
81/81 - 1s - loss: 9.5435e-04 - accuracy: 0.0000e+00 - val_loss: 0.0071 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 754ms/epoch - 9ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.00395
81/81 - 1s - loss: 9.4532e-04 - accuracy: 0.0000e+00 - val_loss: 0.0069 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 753ms/epoch - 9ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.00395
81/81 - 1s - loss: 9.3690e-04 - accuracy: 0.0000e+00 - val_loss: 0.0066 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 725ms/epoch - 9ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.00395
81/81 - 1s - loss: 9.2908e-04 - accuracy: 0.0000e+00 - val_loss: 0.0064 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 712ms/epoch - 9ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.00395
81/81 - 1s - loss: 9.2183e-04 - accuracy: 0.0000e+00 - val_loss: 0.0061 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 718ms/epoch - 9ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.00395
81/81 - 1s - loss: 9.1511e-04 - accuracy: 0.0000e+00 - val_loss: 0.0059 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 706ms/epoch - 9ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.00395
81/81 - 1s - loss: 9.0890e-04 - accuracy: 0.0000e+00 - val_loss: 0.0057 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 698ms/epoch - 9ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.00395
81/81 - 1s - loss: 9.0317e-04 - accuracy: 0.0000e+00 - val_loss: 0.0055 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 706ms/epoch - 9ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.00395
81/81 - 1s - loss: 8.9787e-04 - accuracy: 0.0000e+00 - val_loss: 0.0053 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 684ms/epoch - 8ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.00395
81/81 - 1s - loss: 8.9298e-04 - accuracy: 0.0000e+00 - val_loss: 0.0051 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 700ms/epoch - 9ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.00395
81/81 - 1s - loss: 8.8845e-04 - accuracy: 0.0000e+00 - val_loss: 0.0049 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 706ms/epoch - 9ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.00395
81/81 - 1s - loss: 8.8424e-04 - accuracy: 0.0000e+00 - val_loss: 0.0047 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 700ms/epoch - 9ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.00395
81/81 - 1s - loss: 8.8034e-04 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 707ms/epoch - 9ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.00395
81/81 - 1s - loss: 8.7669e-04 - accuracy: 0.0000e+00 - val_loss: 0.0043 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 706ms/epoch - 9ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.00395
81/81 - 1s - loss: 8.7328e-04 - accuracy: 0.0000e+00 - val_loss: 0.0042 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 729ms/epoch - 9ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.00395
81/81 - 1s - loss: 8.7006e-04 - accuracy: 0.0000e+00 - val_loss: 0.0040 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 703ms/epoch - 9ms/step
Epoch 47/500

Epoch 00047: val_loss improved from 0.00395 to 0.00387, saving model to LSTM2.h5
81/81 - 1s - loss: 8.6701e-04 - accuracy: 0.0000e+00 - val_loss: 0.0039 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 909ms/epoch - 11ms/step
Epoch 48/500

Epoch 00048: val_loss improved from 0.00387 to 0.00372, saving model to LSTM2.h5
81/81 - 1s - loss: 8.6411e-04 - accuracy: 0.0000e+00 - val_loss: 0.0037 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 761ms/epoch - 9ms/step
Epoch 49/500

Epoch 00049: val_loss improved from 0.00372 to 0.00359, saving model to LSTM2.h5
81/81 - 1s - loss: 8.6133e-04 - accuracy: 0.0000e+00 - val_loss: 0.0036 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 728ms/epoch - 9ms/step
Epoch 50/500

Epoch 00050: val_loss improved from 0.00359 to 0.00347, saving model to LSTM2.h5
81/81 - 1s - loss: 8.5865e-04 - accuracy: 0.0000e+00 - val_loss: 0.0035 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 716ms/epoch - 9ms/step
Epoch 51/500

Epoch 00051: val_loss improved from 0.00347 to 0.00335, saving model to LSTM2.h5
81/81 - 1s - loss: 8.5605e-04 - accuracy: 0.0000e+00 - val_loss: 0.0034 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 728ms/epoch - 9ms/step
Epoch 52/500

Epoch 00052: val_loss improved from 0.00335 to 0.00324, saving model to LSTM2.h5
81/81 - 1s - loss: 8.5352e-04 - accuracy: 0.0000e+00 - val_loss: 0.0032 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 734ms/epoch - 9ms/step
Epoch 53/500

Epoch 00053: val_loss improved from 0.00324 to 0.00314, saving model to LSTM2.h5
81/81 - 1s - loss: 8.5104e-04 - accuracy: 0.0000e+00 - val_loss: 0.0031 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 776ms/epoch - 10ms/step
Epoch 54/500

Epoch 00054: val_loss improved from 0.00314 to 0.00304, saving model to LSTM2.h5
81/81 - 1s - loss: 8.4859e-04 - accuracy: 0.0000e+00 - val_loss: 0.0030 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 722ms/epoch - 9ms/step
Epoch 55/500

Epoch 00055: val_loss improved from 0.00304 to 0.00296, saving model to LSTM2.h5
81/81 - 1s - loss: 8.4617e-04 - accuracy: 0.0000e+00 - val_loss: 0.0030 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 731ms/epoch - 9ms/step
Epoch 56/500

Epoch 00056: val_loss improved from 0.00296 to 0.00288, saving model to LSTM2.h5
81/81 - 1s - loss: 8.4377e-04 - accuracy: 0.0000e+00 - val_loss: 0.0029 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 746ms/epoch - 9ms/step
Epoch 57/500

Epoch 00057: val_loss improved from 0.00288 to 0.00280, saving model to LSTM2.h5
81/81 - 1s - loss: 8.4138e-04 - accuracy: 0.0000e+00 - val_loss: 0.0028 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 748ms/epoch - 9ms/step
Epoch 58/500

Epoch 00058: val_loss improved from 0.00280 to 0.00274, saving model to LSTM2.h5
81/81 - 1s - loss: 8.3899e-04 - accuracy: 0.0000e+00 - val_loss: 0.0027 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 727ms/epoch - 9ms/step
Epoch 59/500

Epoch 00059: val_loss improved from 0.00274 to 0.00268, saving model to LSTM2.h5
81/81 - 1s - loss: 8.3660e-04 - accuracy: 0.0000e+00 - val_loss: 0.0027 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 736ms/epoch - 9ms/step
Epoch 60/500

Epoch 00060: val_loss improved from 0.00268 to 0.00262, saving model to LSTM2.h5
81/81 - 1s - loss: 8.3421e-04 - accuracy: 0.0000e+00 - val_loss: 0.0026 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 769ms/epoch - 9ms/step
Epoch 61/500

Epoch 00061: val_loss improved from 0.00262 to 0.00258, saving model to LSTM2.h5
81/81 - 1s - loss: 8.3181e-04 - accuracy: 0.0000e+00 - val_loss: 0.0026 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 755ms/epoch - 9ms/step
Epoch 62/500

Epoch 00062: val_loss improved from 0.00258 to 0.00254, saving model to LSTM2.h5
81/81 - 1s - loss: 8.2940e-04 - accuracy: 0.0000e+00 - val_loss: 0.0025 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 736ms/epoch - 9ms/step
Epoch 63/500

Epoch 00063: val_loss improved from 0.00254 to 0.00250, saving model to LSTM2.h5
81/81 - 1s - loss: 8.2697e-04 - accuracy: 0.0000e+00 - val_loss: 0.0025 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 729ms/epoch - 9ms/step
Epoch 64/500

Epoch 00064: val_loss improved from 0.00250 to 0.00248, saving model to LSTM2.h5
81/81 - 1s - loss: 8.2454e-04 - accuracy: 0.0000e+00 - val_loss: 0.0025 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 783ms/epoch - 10ms/step
Epoch 65/500

Epoch 00065: val_loss improved from 0.00248 to 0.00246, saving model to LSTM2.h5
81/81 - 1s - loss: 8.2209e-04 - accuracy: 0.0000e+00 - val_loss: 0.0025 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 750ms/epoch - 9ms/step
Epoch 66/500

Epoch 00066: val_loss improved from 0.00246 to 0.00244, saving model to LSTM2.h5
81/81 - 1s - loss: 8.1963e-04 - accuracy: 0.0000e+00 - val_loss: 0.0024 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 707ms/epoch - 9ms/step
Epoch 67/500

Epoch 00067: val_loss improved from 0.00244 to 0.00243, saving model to LSTM2.h5
81/81 - 1s - loss: 8.1715e-04 - accuracy: 0.0000e+00 - val_loss: 0.0024 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 739ms/epoch - 9ms/step
Epoch 68/500

Epoch 00068: val_loss improved from 0.00243 to 0.00243, saving model to LSTM2.h5
81/81 - 1s - loss: 8.1466e-04 - accuracy: 0.0000e+00 - val_loss: 0.0024 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 735ms/epoch - 9ms/step
Epoch 69/500

Epoch 00069: val_loss did not improve from 0.00243
81/81 - 1s - loss: 8.1216e-04 - accuracy: 0.0000e+00 - val_loss: 0.0024 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 693ms/epoch - 9ms/step
Epoch 70/500

Epoch 00070: val_loss did not improve from 0.00243
81/81 - 1s - loss: 8.0964e-04 - accuracy: 0.0000e+00 - val_loss: 0.0024 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 707ms/epoch - 9ms/step
Epoch 71/500

Epoch 00071: val_loss did not improve from 0.00243
81/81 - 1s - loss: 8.0712e-04 - accuracy: 0.0000e+00 - val_loss: 0.0025 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 711ms/epoch - 9ms/step
[... epochs 72–117 elided: val_loss never improved on 0.00243; training loss fell steadily from 8.05e-04 to 6.95e-04 while val_loss drifted up from 0.0025 to 0.0054 at lr 1.0000e-05 ...]
Epoch 118/500

Epoch 00118: val_loss did not improve from 0.00243
81/81 - 1s - loss: 6.9275e-04 - accuracy: 0.0000e+00 - val_loss: 0.0055 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 749ms/epoch - 9ms/step
Epoch 00118: early stopping
SMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 136.8787627980392 
RMSE:	 11.699519767838302 
MAPE:	 9.782684450137419

EMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	44.78% Accuracy
MSE:	 325.2876788257106 
RMSE:	 18.035733387520192 
MAPE:	 15.58326491461623

WMA
Prediction vs Close:		54.85% Accuracy
Prediction vs Prediction:	43.28% Accuracy
MSE:	 142.26654445774975 
RMSE:	 11.927554001460221 
MAPE:	 9.787176512242624

DEMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	45.52% Accuracy
MSE:	 171.26734938505615 
RMSE:	 13.086915197442679 
MAPE:	 11.821213958102536

KAMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 52.24625253706662 
RMSE:	 7.228156925321048 
MAPE:	 5.787764162926424

MIDPOINT
Prediction vs Close:		51.87% Accuracy
Prediction vs Prediction:	45.52% Accuracy
MSE:	 51.03620830548878 
RMSE:	 7.143963067197981 
MAPE:	 5.7689786092745114

T3
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	48.13% Accuracy
MSE:	 131.82692716984482 
RMSE:	 11.481590794391028 
MAPE:	 9.148826908925223

TEMA
Prediction vs Close:		50.37% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 103.01748696218638 
RMSE:	 10.149753049320283 
MAPE:	 9.012271728014667
Runtime: mins: 20.660532839866665
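The "Prediction vs Close" and "Prediction vs Prediction" figures above are directional hit rates: a step counts as correct when the prediction and the actual price move in the same direction. A minimal pure-Python sketch of the two checks (a hypothetical helper mirroring the comparison logic used in the simulation loop, not a function from this notebook):

```python
def directional_accuracy(pred, actual):
    """Hit rates (fractions in [0, 1]) for the two directional checks."""
    hits_close = hits_pred = 0
    n = len(pred)
    for i in range(1, n):
        # prediction vs close: is pred[i] on the same side of actual[i-1] as actual[i]?
        if (pred[i] > actual[i-1] and actual[i] > actual[i-1]) or \
           (pred[i] < actual[i-1] and actual[i] < actual[i-1]):
            hits_close += 1
        # prediction vs prediction: does the predicted step agree in sign with the actual step?
        if (pred[i] > pred[i-1] and actual[i] > actual[i-1]) or \
           (pred[i] < pred[i-1] and actual[i] < actual[i-1]):
            hits_pred += 1
    return hits_close / (n - 1), hits_pred / (n - 1)

# toy series: two of three steps called correctly by each check
acc1, acc2 = directional_accuracy([100, 102, 101, 103], [100, 101, 102, 104])
```

Note that ties (an exactly flat prediction or price) count as misses, because both comparisons are strict, matching the `else: append(0)` branch in the loop.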

Architecture Used

In [ ]:
from google.colab import files
import cv2
uploaded = files.upload()
Saving Experiment2.png to Experiment2 (1).png
In [ ]:
img = cv2.imread('Experiment2.png')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # OpenCV loads images as BGR; convert for matplotlib
plt.figure(figsize=(20,10))
plt.axis("off")
plt.title('LSTM Architecture Experiment2', fontsize=18)
plt.imshow(img)
Out[ ]:
<matplotlib.image.AxesImage at 0x7fdfb2c6f290>

Model Plots

In [73]:
with open('simulation2_data.json') as json_file:
    simulation2 = json.load(json_file)
imgfile = 'Experiment2'
In [74]:
for i in range(len(list(simulation2.keys()))):
  SIM = list(simulation2.keys())[i]
  plot_train(simulation2,SIM)
  plot_test(simulation2,SIM)
----- Train RMSE for SMA ----- 8.826942967936205
----- Train_MSE_LSTM for SMA ----- 77.91492215919841
----- Train MAE LSTM for SMA ----- 7.736753512883909
----- Test RMSE for SMA----- 11.699519767838302
----- Test_MSE_LSTM for SMA----- 136.8787627980392
----- Test_MAE_LSTM for SMA----- 9.782684450137419
----- Train RMSE for EMA ----- 9.86405135909782
----- Train_MSE_LSTM for EMA ----- 97.29950921491954
----- Train MAE LSTM for EMA ----- 8.73726280728392
----- Test RMSE for EMA----- 18.035733387520192
----- Test_MSE_LSTM for EMA----- 325.2876788257106
----- Test_MAE_LSTM for EMA----- 15.58326491461623
----- Train RMSE for WMA ----- 10.453164264775387
----- Train_MSE_LSTM for WMA ----- 109.26864314637714
----- Train MAE LSTM for WMA ----- 9.361118262029704
----- Test RMSE for WMA----- 11.927554001460221
----- Test_MSE_LSTM for WMA----- 142.26654445774975
----- Test_MAE_LSTM for WMA----- 9.787176512242624
----- Train RMSE for DEMA ----- 12.156210755746763
----- Train_MSE_LSTM for DEMA ----- 147.77345993813327
----- Train MAE LSTM for DEMA ----- 10.956398825846396
----- Test RMSE for DEMA----- 13.086915197442679
----- Test_MSE_LSTM for DEMA----- 171.26734938505615
----- Test_MAE_LSTM for DEMA----- 11.821213958102536
----- Train RMSE for KAMA ----- 10.497854644863434
----- Train_MSE_LSTM for KAMA ----- 110.20495214468077
----- Train MAE LSTM for KAMA ----- 9.473006183673434
----- Test RMSE for KAMA----- 7.228156925321048
----- Test_MSE_LSTM for KAMA----- 52.24625253706662
----- Test_MAE_LSTM for KAMA----- 5.787764162926424
----- Train RMSE for MIDPOINT ----- 9.46823580889301
----- Train_MSE_LSTM for MIDPOINT ----- 89.64748933280386
----- Train MAE LSTM for MIDPOINT ----- 8.431252318872415
----- Test RMSE for MIDPOINT----- 7.143963067197981
----- Test_MSE_LSTM for MIDPOINT----- 51.03620830548878
----- Test_MAE_LSTM for MIDPOINT----- 5.7689786092745114
----- Train RMSE for T3 ----- 12.040312964311324
----- Train_MSE_LSTM for T3 ----- 144.96913627856335
----- Train MAE LSTM for T3 ----- 10.855980292569482
----- Test RMSE for T3----- 11.481590794391028
----- Test_MSE_LSTM for T3----- 131.82692716984482
----- Test_MAE_LSTM for T3----- 9.148826908925223
----- Train RMSE for TEMA ----- 7.410384228353796
----- Train_MSE_LSTM for TEMA ----- 54.91379441183469
----- Train MAE LSTM for TEMA ----- 5.102082156767552
----- Test RMSE for TEMA----- 10.149753049320283
----- Test_MSE_LSTM for TEMA----- 103.01748696218638
----- Test_MAE_LSTM for TEMA----- 9.012271728014667

Univariate ARIMA Multistep Multivariate LSTM Hybrid Model Experiment 3

In [ ]:
def get_lstm(data,original_data, train_len, test_len,img_file,ma ,lstm_len=3):
    # prepare train and test data
    X_value = pd.DataFrame(data.iloc[:, :])
    y_value = pd.DataFrame(data.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scale_dataset = X_scaler.fit_transform(X_value)
    y_scale_dataset = y_scaler.fit_transform(y_value)
    # Get data and check shape
    X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)  # X has shape (samples, n_steps_in, n_features), e.g. 224 x 3 x 21 (each 3 x 21 slice is 3 days of data); yc holds the corresponding closing prices
    # pdb.set_trace()
    X_train, X_test, = split_train_test(X)
    y_train, y_test, = split_train_test(y)
    # yc_train, yc_test, = split_train_test(original_data)
    index_train, index_test, = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
    det = 20  # constant offset later subtracted from the test predictions (empirical bias correction)
    input_dim = X_train.shape[1]#3
    feature_size = X_train.shape[2]#24
    output_dim = y_train.shape[1]#1



    # # Option 1
    # # Set up & fit LSTM RNN
    # model = Sequential()
    # model.add(LSTM(256, activation='relu', kernel_initializer='he_normal', input_shape=(input_dim, feature_size)))
    # model.add(Dense(units=64,activation='relu'))
    # model.add(Dropout(0.5))
    # model.add(Dense(units=output_dim))
    # model.compile(optimizer=Adam(learning_rate = 0.001), loss='mse')

    # ## Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()


    # # option 2
    # model = Sequential()
    # model.add(Bidirectional(LSTM(units= 128), input_shape=(input_dim, feature_size)))
    # model.add(Dense(64))
    # model.add(Dense(units=output_dim))
    # model.compile(optimizer=Adam(lr = 0.001), loss='mean_squared_error', metrics=['accuracy'])
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()




    # Option 3
    # define custom activation
    class Double_Tanh(Activation):
        def __init__(self, activation, **kwargs):
            super(Double_Tanh, self).__init__(activation, **kwargs)
            self.__name__ = 'double_tanh'

    def double_tanh(x):
        return (K.tanh(x) * 2)

    get_custom_objects().update({'double_tanh': Double_Tanh(double_tanh)})
    # Model Generation
    model = Sequential()
    #check https://machinelearningmastery.com/use-weight-regularization-lstm-networks-time-series-forecasting/
    model.add(LSTM(25, input_shape=(input_dim, feature_size), dropout=0.2, kernel_regularizer=l1_l2(0.00,0.00), bias_regularizer=l1_l2(0.00,0.00)))
    model.add(Dense(1))
    model.add(Activation(double_tanh))
    model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mse', 'mae'])
    # Common code
    callbacks = [
    EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    ModelCheckpoint('LSTM3.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    fname1 = img_file+'.png'
    tensorflow.keras.utils.plot_model(
        model, to_file=fname1, show_shapes=True, show_dtype=False,
        show_layer_names=True, expand_nested=False, dpi=96,
        layer_range=None, show_layer_activations=False
    )
    history = model.fit(X_train, y_train, epochs=500, batch_size=int( optimized_period[ma]), verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # plot loss
    fname2 = img_file+'-'+ma
    plt.title(img_file+'-'+ma+' Loss')
    plt.xlabel("Epochs")
    plt.ylabel("Loss")
    pyplot.plot(history.history['loss'], label='train')
    pyplot.plot(history.history['val_loss'], label='validation')
    pyplot.legend()
    pyplot.savefig(fname2+'.png',dpi='figure')
    pyplot.show()

    # Option 4
    # Set up & fit LSTM RNN
    # model = Sequential()
    # model.add(LSTM(units=lstm_len, return_sequences=True, input_shape=(x_train.shape[1], 1)))
    # model.add(LSTM(units=int(lstm_len/2)))
    # model.add(Dense(1, activation='sigmoid'))
    # model.compile(loss='mean_squared_error', optimizer='adam')
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()



    # Generate predictions
    predictiontr = model.predict(X_train, verbose=0)
    predictiontr = y_scaler.inverse_transform(predictiontr).tolist()
    outputtr = []
    for i in range(len(predictiontr)):
        outputtr.extend(predictiontr[i])
    predictiontr = outputtr
    # Generate error data in the original scale: y_train is still scaled,
    # so invert it before comparing against the inverse-transformed predictions
    Original_tr = y_scaler.inverse_transform(y_train).flatten().tolist()
    mse_tr = mean_squared_error(Original_tr, predictiontr)
    rmse_tr = mse_tr ** 0.5
    mae_tr = mean_absolute_error(Original_tr, pd.Series(predictiontr))


    predictionte = model.predict(X_test, verbose=0)
    predictionte = (y_scaler.inverse_transform(predictionte)-det).tolist()
    outputte = []
    for i in range(len(predictionte)):
        outputte.extend(predictionte[i])
    predictionte = outputte
    # Generate error data, again in the original scale
    Original_te = y_scaler.inverse_transform(y_test).flatten().tolist()
    mse_te = mean_squared_error(Original_te, predictionte)
    rmse_te = mse_te ** 0.5
    mae_te = mean_absolute_error(Original_te, pd.Series(predictionte))

    return Original_tr, predictiontr, mse_tr, rmse_tr,mae_tr,Original_te,predictionte, mse_te,rmse_te,mae_te
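The double-tanh output activation in Option 3 simply rescales tanh to the open interval (-2, 2), giving the output head room to reach targets scaled to (-1, 1) without saturating at the edges. A quick standalone sanity check of the activation's shape (independent of the Keras wrapper above):

```python
import math

def double_tanh(x):
    # same form as the custom Keras activation above: 2 * tanh(x), range (-2, 2)
    return 2.0 * math.tanh(x)

# sample the activation over [-5, 5]
samples = [double_tanh(v / 10.0) for v in range(-50, 51)]
```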
In [ ]:
if __name__ == '__main__':
    start_time = timeit.default_timer()
    simulation3 = {}
    imgfile = 'Experiment3'
    for ma in optimized_period:
              print(ma)
              print(functions[ma])
              print ( int( optimized_period[ma]))
            # if ma == 'SMA':
              low_vol = df.apply(lambda c:  functions[ma](c, timeperiod = int( optimized_period[ma])))
              low_vol = low_vol.fillna(0)
              low_vol_data = df['close']
              high_vol = pd.DataFrame()
              df2 = df.copy()
              for i in df2.columns:
                if i in low_vol.columns:
                  high_vol[i] = df2[i].subtract(low_vol[i], fill_value=0)
              high_vol_data = df['close']
              ## *****************************************************
              # Generate ARIMA and LSTM predictions
              print('\nWorking on ' + ma + ' predictions')
              try:
                print('parameters used : ', train_len, test_len)
                low_vol_Original, low_vol_prediction, low_vol_mse, low_vol_rmse, low_vol_mae = get_arima(low_vol, low_vol_data, train_len, test_len)
              except Exception as e:
                  print('ARIMA error, skipping to next MA type:', e)
                  continue
              Original_tr, predictiontr, mse_tr, rmse_tr,mae_tr,high_vol_Original, high_vol_prediction, high_vol_mse, high_vol_rmse,high_vol_mae, = get_lstm(high_vol,high_vol_data, train_len, test_len,imgfile,ma)
              final_prediction_tr = df['close'].head(train_len).values + pd.Series(predictiontr) # ignoring first 3 steps 
              mse_ftr = mean_squared_error(df['close'].head(train_len).values,final_prediction_tr.values)
              rmse_ftr = mse_ftr ** 0.5
              mape_ftr = mean_absolute_percentage_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
              mae_ftr = mean_absolute_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)

              final_prediction = pd.Series(low_vol_prediction[3:]) + pd.Series(high_vol_prediction)
              mse = mean_squared_error(df['close'].tail(test_len).values,final_prediction.values)
              rmse = mse ** 0.5
              mape = mean_absolute_percentage_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
              mae = mean_absolute_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
              # Generate prediction accuracy
              actual = df['close'].tail(test_len).values
              result_1 = []
              result_2 = []
              for i in range(1, len(final_prediction)):
                  # Compare prediction to previous close price
                  if final_prediction[i] > actual[i-1] and actual[i] > actual[i-1]:
                      result_1.append(1)
                  elif final_prediction[i] < actual[i-1] and actual[i] < actual[i-1]:
                      result_1.append(1)
                  else:
                      result_1.append(0)

                  # Compare prediction to previous prediction
                  if final_prediction[i] > final_prediction[i-1] and actual[i] > actual[i-1]:
                      result_2.append(1)
                  elif final_prediction[i] < final_prediction[i-1] and actual[i] < actual[i-1]:
                      result_2.append(1)
                  else:
                      result_2.append(0)

              accuracy_1 = np.mean(result_1)
              accuracy_2 = np.mean(result_2)

              simulation3[ma] = {'low_vol': {'original':list(low_vol_Original), 'prediction': list(low_vol_prediction) , 'mse': low_vol_mse,
                                            'rmse': low_vol_rmse, 'mae' : low_vol_mae},
                                'high_vol': {'original':list(high_vol_Original),'prediction': list(high_vol_prediction), 'mse': high_vol_mse,
                                            'rmse': high_vol_rmse, 'mae' : high_vol_mae},
                                'final_tr': {'original':df['close'].head(train_len).tolist(),'prediction': final_prediction_tr.values.tolist(), 'mse': mse_ftr,
                                            'rmse': rmse_ftr, 'mae' : mae_ftr},
                                'final': {'original': df['close'].tail(test_len).tolist(), 'prediction': final_prediction.values.tolist(), 'mse': mse,
                                          'rmse': rmse, 'mae': mae },
                                'accuracy': {'prediction vs close': accuracy_1, 'prediction vs prediction': accuracy_2}}

              # save simulation data here as checkpoint
              with open('simulation3_data.json', 'w') as fp:
                  json.dump(simulation3, fp)

              # print a running summary; use a separate loop variable so the outer `ma` is not clobbered
              for key in simulation3.keys():
                  print('\n' + key)
                  print('Prediction vs Close:\t\t' + str(round(100*simulation3[key]['accuracy']['prediction vs close'], 2))
                        + '% Accuracy')
                  print('Prediction vs Prediction:\t' + str(round(100*simulation3[key]['accuracy']['prediction vs prediction'], 2))
                        + '% Accuracy')
                  print('MSE:\t', simulation3[key]['final']['mse'],
                        '\nRMSE:\t', simulation3[key]['final']['rmse'],
                        '\nMAE:\t', simulation3[key]['final']['mae'])
            # else:
            #   break
    elapsed = timeit.default_timer() - start_time
    print('Runtime: mins:',elapsed/60)
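The loop above splits each series into a smoothed low-volatility component (the moving average, modelled by ARIMA) and a residual high-volatility component (close minus the average, modelled by the LSTM), then sums the two predictions back into the final forecast. A minimal sketch of that decompose/recombine step, with a plain rolling mean standing in for TA-Lib's SMA and hypothetical toy data:

```python
# toy close series (hypothetical values)
close = [100 + i * 0.5 + (3 if i % 7 == 0 else 0) for i in range(30)]
window = 5

# low-volatility component: simple moving average, with the leading
# not-yet-defined region filled with 0, as the loop does via fillna(0)
low_vol = [0.0 if i < window - 1 else sum(close[i - window + 1:i + 1]) / window
           for i in range(len(close))]
# high-volatility component: residual of the close over its smoothed version
high_vol = [c - lv for c, lv in zip(close, low_vol)]

# the hybrid recombination is additive, so the split is lossless by construction
reconstructed = [lv + hv for lv, hv in zip(low_vol, high_vol)]
```

In the hybrid itself, `low_vol` and `high_vol` are replaced by the ARIMA and LSTM predictions of each component, so forecast quality depends on how well each model captures its half of the decomposition.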
SMA
SMA([input_arrays], [timeperiod=30])

Simple Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
17

Working on SMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.68 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4157.020, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3687.148, Time=0.06 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.28 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3458.651, Time=0.09 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3322.133, Time=0.12 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=0.97 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=1.05 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3324.133, Time=0.27 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 3.581 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1657.067
Date:                Sun, 12 Dec 2021   AIC                           3322.133
Time:                        12:40:44   BIC                           3340.897
Sample:                             0   HQIC                          3329.339
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1966      0.003   -387.226      0.000      -1.203      -1.191
ar.L2         -0.8952      0.006   -138.692      0.000      -0.908      -0.883
ar.L3         -0.3968      0.006    -68.284      0.000      -0.408      -0.385
sigma2         3.5858      0.017    214.535      0.000       3.553       3.619
===================================================================================
Ljung-Box (L1) (Q):                  14.47   Jarque-Bera (JB):           2428881.42
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       271.99
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.36232, saving model to LSTM3.h5
48/48 - 3s - loss: 0.2069 - mse: 0.2069 - mae: 0.3363 - val_loss: 0.3623 - val_mse: 0.3623 - val_mae: 0.5924 - lr: 0.0010 - 3s/epoch - 65ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.36232 to 0.17260, saving model to LSTM3.h5
48/48 - 0s - loss: 0.0285 - mse: 0.0285 - mae: 0.1319 - val_loss: 0.1726 - val_mse: 0.1726 - val_mae: 0.4047 - lr: 0.0010 - 353ms/epoch - 7ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.17260 to 0.08198, saving model to LSTM3.h5
48/48 - 0s - loss: 0.0180 - mse: 0.0180 - mae: 0.1071 - val_loss: 0.0820 - val_mse: 0.0820 - val_mae: 0.2728 - lr: 0.0010 - 367ms/epoch - 8ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.08198 to 0.05134, saving model to LSTM3.h5
48/48 - 0s - loss: 0.0160 - mse: 0.0160 - mae: 0.1013 - val_loss: 0.0513 - val_mse: 0.0513 - val_mae: 0.2104 - lr: 0.0010 - 333ms/epoch - 7ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.05134 to 0.03382, saving model to LSTM3.h5
48/48 - 0s - loss: 0.0141 - mse: 0.0141 - mae: 0.0960 - val_loss: 0.0338 - val_mse: 0.0338 - val_mae: 0.1655 - lr: 0.0010 - 341ms/epoch - 7ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.03382 to 0.02675, saving model to LSTM3.h5
48/48 - 0s - loss: 0.0142 - mse: 0.0142 - mae: 0.0954 - val_loss: 0.0267 - val_mse: 0.0267 - val_mae: 0.1439 - lr: 0.0010 - 343ms/epoch - 7ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.02675
48/48 - 0s - loss: 0.0124 - mse: 0.0124 - mae: 0.0894 - val_loss: 0.0276 - val_mse: 0.0276 - val_mae: 0.1466 - lr: 0.0010 - 341ms/epoch - 7ms/step
Epoch 8/500

Epoch 00008: val_loss improved from 0.02675 to 0.02619, saving model to LSTM3.h5
48/48 - 0s - loss: 0.0143 - mse: 0.0143 - mae: 0.0972 - val_loss: 0.0262 - val_mse: 0.0262 - val_mae: 0.1422 - lr: 0.0010 - 346ms/epoch - 7ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.02619
48/48 - 0s - loss: 0.0122 - mse: 0.0122 - mae: 0.0879 - val_loss: 0.0262 - val_mse: 0.0262 - val_mae: 0.1425 - lr: 0.0010 - 330ms/epoch - 7ms/step
Epoch 10/500

Epoch 00010: val_loss improved from 0.02619 to 0.01865, saving model to LSTM3.h5
48/48 - 0s - loss: 0.0134 - mse: 0.0134 - mae: 0.0908 - val_loss: 0.0187 - val_mse: 0.0187 - val_mae: 0.1158 - lr: 0.0010 - 322ms/epoch - 7ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.01865
48/48 - 0s - loss: 0.0126 - mse: 0.0126 - mae: 0.0902 - val_loss: 0.0296 - val_mse: 0.0296 - val_mae: 0.1537 - lr: 0.0010 - 333ms/epoch - 7ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.01865
48/48 - 0s - loss: 0.0135 - mse: 0.0135 - mae: 0.0915 - val_loss: 0.0223 - val_mse: 0.0223 - val_mae: 0.1294 - lr: 0.0010 - 315ms/epoch - 7ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.01865
48/48 - 0s - loss: 0.0117 - mse: 0.0117 - mae: 0.0853 - val_loss: 0.0217 - val_mse: 0.0217 - val_mae: 0.1271 - lr: 0.0010 - 323ms/epoch - 7ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.01865
48/48 - 0s - loss: 0.0126 - mse: 0.0126 - mae: 0.0885 - val_loss: 0.0242 - val_mse: 0.0242 - val_mae: 0.1365 - lr: 0.0010 - 336ms/epoch - 7ms/step
Epoch 15/500

Epoch 00015: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00015: val_loss did not improve from 0.01865
48/48 - 0s - loss: 0.0129 - mse: 0.0129 - mae: 0.0892 - val_loss: 0.0219 - val_mse: 0.0219 - val_mae: 0.1280 - lr: 0.0010 - 396ms/epoch - 8ms/step
Epoch 16/500

Epoch 00016: val_loss improved from 0.01865 to 0.01294, saving model to LSTM3.h5
48/48 - 0s - loss: 0.0235 - mse: 0.0235 - mae: 0.1254 - val_loss: 0.0129 - val_mse: 0.0129 - val_mae: 0.0940 - lr: 1.0000e-04 - 325ms/epoch - 7ms/step
Epoch 17/500

Epoch 00017: val_loss improved from 0.01294 to 0.01156, saving model to LSTM3.h5
48/48 - 0s - loss: 0.0077 - mse: 0.0077 - mae: 0.0724 - val_loss: 0.0116 - val_mse: 0.0116 - val_mae: 0.0883 - lr: 1.0000e-04 - 353ms/epoch - 7ms/step
Epoch 18/500

Epoch 00018: val_loss improved from 0.01156 to 0.01119, saving model to LSTM3.h5
48/48 - 0s - loss: 0.0069 - mse: 0.0069 - mae: 0.0673 - val_loss: 0.0112 - val_mse: 0.0112 - val_mae: 0.0868 - lr: 1.0000e-04 - 346ms/epoch - 7ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.01119
48/48 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0651 - val_loss: 0.0113 - val_mse: 0.0113 - val_mae: 0.0873 - lr: 1.0000e-04 - 300ms/epoch - 6ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.01119
48/48 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0643 - val_loss: 0.0115 - val_mse: 0.0115 - val_mae: 0.0883 - lr: 1.0000e-04 - 334ms/epoch - 7ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.01119
48/48 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0636 - val_loss: 0.0116 - val_mse: 0.0116 - val_mae: 0.0891 - lr: 1.0000e-04 - 329ms/epoch - 7ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.01119
48/48 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0646 - val_loss: 0.0118 - val_mse: 0.0118 - val_mae: 0.0898 - lr: 1.0000e-04 - 322ms/epoch - 7ms/step
Epoch 23/500

Epoch 00023: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00023: val_loss did not improve from 0.01119
48/48 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0612 - val_loss: 0.0124 - val_mse: 0.0124 - val_mae: 0.0925 - lr: 1.0000e-04 - 320ms/epoch - 7ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.01119
48/48 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0587 - val_loss: 0.0124 - val_mse: 0.0124 - val_mae: 0.0925 - lr: 1.0000e-05 - 401ms/epoch - 8ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.01119
48/48 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0579 - val_loss: 0.0123 - val_mse: 0.0123 - val_mae: 0.0922 - lr: 1.0000e-05 - 314ms/epoch - 7ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.01119
48/48 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0596 - val_loss: 0.0122 - val_mse: 0.0122 - val_mae: 0.0920 - lr: 1.0000e-05 - 334ms/epoch - 7ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.01119
48/48 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0592 - val_loss: 0.0122 - val_mse: 0.0122 - val_mae: 0.0917 - lr: 1.0000e-05 - 311ms/epoch - 6ms/step
Epoch 28/500

Epoch 00028: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00028: val_loss did not improve from 0.01119
48/48 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0604 - val_loss: 0.0122 - val_mse: 0.0122 - val_mae: 0.0919 - lr: 1.0000e-05 - 316ms/epoch - 7ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.01119
48/48 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0594 - val_loss: 0.0121 - val_mse: 0.0121 - val_mae: 0.0916 - lr: 1.0000e-05 - 343ms/epoch - 7ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.01119
48/48 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0595 - val_loss: 0.0120 - val_mse: 0.0120 - val_mae: 0.0912 - lr: 1.0000e-05 - 311ms/epoch - 6ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.01119
48/48 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0588 - val_loss: 0.0120 - val_mse: 0.0120 - val_mae: 0.0910 - lr: 1.0000e-05 - 310ms/epoch - 6ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.01119
48/48 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0566 - val_loss: 0.0120 - val_mse: 0.0120 - val_mae: 0.0911 - lr: 1.0000e-05 - 319ms/epoch - 7ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.01119
48/48 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0590 - val_loss: 0.0121 - val_mse: 0.0121 - val_mae: 0.0913 - lr: 1.0000e-05 - 302ms/epoch - 6ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.01119
48/48 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0594 - val_loss: 0.0121 - val_mse: 0.0121 - val_mae: 0.0914 - lr: 1.0000e-05 - 333ms/epoch - 7ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.01119
48/48 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0561 - val_loss: 0.0121 - val_mse: 0.0121 - val_mae: 0.0916 - lr: 1.0000e-05 - 313ms/epoch - 7ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.01119
48/48 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0575 - val_loss: 0.0122 - val_mse: 0.0122 - val_mae: 0.0919 - lr: 1.0000e-05 - 335ms/epoch - 7ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.01119
48/48 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0603 - val_loss: 0.0122 - val_mse: 0.0122 - val_mae: 0.0920 - lr: 1.0000e-05 - 309ms/epoch - 6ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.01119
48/48 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0594 - val_loss: 0.0122 - val_mse: 0.0122 - val_mae: 0.0919 - lr: 1.0000e-05 - 340ms/epoch - 7ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.01119
48/48 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0580 - val_loss: 0.0122 - val_mse: 0.0122 - val_mae: 0.0918 - lr: 1.0000e-05 - 319ms/epoch - 7ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.01119
48/48 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0565 - val_loss: 0.0122 - val_mse: 0.0122 - val_mae: 0.0920 - lr: 1.0000e-05 - 327ms/epoch - 7ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.01119
48/48 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0586 - val_loss: 0.0122 - val_mse: 0.0122 - val_mae: 0.0922 - lr: 1.0000e-05 - 346ms/epoch - 7ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.01119
48/48 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0574 - val_loss: 0.0122 - val_mse: 0.0122 - val_mae: 0.0922 - lr: 1.0000e-05 - 329ms/epoch - 7ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.01119
48/48 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0603 - val_loss: 0.0122 - val_mse: 0.0122 - val_mae: 0.0921 - lr: 1.0000e-05 - 327ms/epoch - 7ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.01119
48/48 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0607 - val_loss: 0.0122 - val_mse: 0.0122 - val_mae: 0.0918 - lr: 1.0000e-05 - 314ms/epoch - 7ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.01119
48/48 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0587 - val_loss: 0.0122 - val_mse: 0.0122 - val_mae: 0.0919 - lr: 1.0000e-05 - 320ms/epoch - 7ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.01119
48/48 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0586 - val_loss: 0.0122 - val_mse: 0.0122 - val_mae: 0.0920 - lr: 1.0000e-05 - 309ms/epoch - 6ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.01119
48/48 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0559 - val_loss: 0.0122 - val_mse: 0.0122 - val_mae: 0.0920 - lr: 1.0000e-05 - 327ms/epoch - 7ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.01119
48/48 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0569 - val_loss: 0.0123 - val_mse: 0.0123 - val_mae: 0.0923 - lr: 1.0000e-05 - 350ms/epoch - 7ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.01119
48/48 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0555 - val_loss: 0.0123 - val_mse: 0.0123 - val_mae: 0.0923 - lr: 1.0000e-05 - 340ms/epoch - 7ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.01119
48/48 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0561 - val_loss: 0.0124 - val_mse: 0.0124 - val_mae: 0.0928 - lr: 1.0000e-05 - 322ms/epoch - 7ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.01119
48/48 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0588 - val_loss: 0.0125 - val_mse: 0.0125 - val_mae: 0.0935 - lr: 1.0000e-05 - 322ms/epoch - 7ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.01119
48/48 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0587 - val_loss: 0.0126 - val_mse: 0.0126 - val_mae: 0.0937 - lr: 1.0000e-05 - 306ms/epoch - 6ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.01119
48/48 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0576 - val_loss: 0.0126 - val_mse: 0.0126 - val_mae: 0.0939 - lr: 1.0000e-05 - 316ms/epoch - 7ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.01119
48/48 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0576 - val_loss: 0.0127 - val_mse: 0.0127 - val_mae: 0.0941 - lr: 1.0000e-05 - 427ms/epoch - 9ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.01119
48/48 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0551 - val_loss: 0.0127 - val_mse: 0.0127 - val_mae: 0.0942 - lr: 1.0000e-05 - 320ms/epoch - 7ms/step
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.01119
48/48 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0561 - val_loss: 0.0127 - val_mse: 0.0127 - val_mae: 0.0941 - lr: 1.0000e-05 - 314ms/epoch - 7ms/step
Epoch 57/500

Epoch 00057: val_loss did not improve from 0.01119
48/48 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0571 - val_loss: 0.0127 - val_mse: 0.0127 - val_mae: 0.0943 - lr: 1.0000e-05 - 317ms/epoch - 7ms/step
Epoch 58/500

Epoch 00058: val_loss did not improve from 0.01119
48/48 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0553 - val_loss: 0.0127 - val_mse: 0.0127 - val_mae: 0.0943 - lr: 1.0000e-05 - 309ms/epoch - 6ms/step
Epoch 59/500

Epoch 00059: val_loss did not improve from 0.01119
48/48 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0573 - val_loss: 0.0128 - val_mse: 0.0128 - val_mae: 0.0946 - lr: 1.0000e-05 - 307ms/epoch - 6ms/step
Epoch 60/500

Epoch 00060: val_loss did not improve from 0.01119
48/48 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0548 - val_loss: 0.0128 - val_mse: 0.0128 - val_mae: 0.0948 - lr: 1.0000e-05 - 331ms/epoch - 7ms/step
Epoch 61/500

Epoch 00061: val_loss did not improve from 0.01119
48/48 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0554 - val_loss: 0.0128 - val_mse: 0.0128 - val_mae: 0.0948 - lr: 1.0000e-05 - 317ms/epoch - 7ms/step
Epoch 62/500

Epoch 00062: val_loss did not improve from 0.01119
48/48 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0560 - val_loss: 0.0129 - val_mse: 0.0129 - val_mae: 0.0952 - lr: 1.0000e-05 - 301ms/epoch - 6ms/step
Epoch 63/500

Epoch 00063: val_loss did not improve from 0.01119
48/48 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0561 - val_loss: 0.0130 - val_mse: 0.0130 - val_mae: 0.0953 - lr: 1.0000e-05 - 327ms/epoch - 7ms/step
Epoch 64/500

Epoch 00064: val_loss did not improve from 0.01119
48/48 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0531 - val_loss: 0.0129 - val_mse: 0.0129 - val_mae: 0.0951 - lr: 1.0000e-05 - 306ms/epoch - 6ms/step
Epoch 65/500

Epoch 00065: val_loss did not improve from 0.01119
48/48 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0551 - val_loss: 0.0129 - val_mse: 0.0129 - val_mae: 0.0950 - lr: 1.0000e-05 - 325ms/epoch - 7ms/step
Epoch 66/500

Epoch 00066: val_loss did not improve from 0.01119
48/48 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0547 - val_loss: 0.0130 - val_mse: 0.0130 - val_mae: 0.0955 - lr: 1.0000e-05 - 321ms/epoch - 7ms/step
Epoch 67/500

Epoch 00067: val_loss did not improve from 0.01119
48/48 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0575 - val_loss: 0.0130 - val_mse: 0.0130 - val_mae: 0.0956 - lr: 1.0000e-05 - 311ms/epoch - 6ms/step
Epoch 68/500

Epoch 00068: val_loss did not improve from 0.01119
48/48 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0560 - val_loss: 0.0131 - val_mse: 0.0131 - val_mae: 0.0958 - lr: 1.0000e-05 - 310ms/epoch - 6ms/step
Epoch 00068: early stopping
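The learning-rate drops visible in the log above (epochs 15, 23, 28) follow the standard `ReduceLROnPlateau` pattern: cut the LR by a factor when `val_loss` fails to improve for a run of epochs. The callback's actual arguments are not shown in this chunk, so `patience=5`, `factor=0.1`, `min_delta=1e-4` and `min_lr=1e-5` below are assumptions inferred from where the log's reductions occur. A minimal pure-Python simulation of that logic, fed the `val_loss` values copied from the log:

```python
# Minimal re-implementation of the ReduceLROnPlateau logic visible in the log.
# patience / factor / min_delta / min_lr are ASSUMPTIONS inferred from the
# epochs at which the log's learning rate drops; the notebook's real callback
# arguments are not shown in this chunk.

def simulate_reduce_lr(val_losses, lr=1e-3, factor=0.1, patience=5,
                       min_delta=1e-4, min_lr=1e-5, first_epoch=1):
    """Return the (1-indexed) epochs at which a plateau triggers an LR cut."""
    best = float("inf")
    wait = 0
    trigger_epochs = []
    for i, loss in enumerate(val_losses):
        epoch = first_epoch + i
        if loss < best - min_delta:      # improvement: reset the counter
            best = loss
            wait = 0
        else:
            wait += 1
            if wait >= patience:         # plateau: cut the LR, reset counter
                lr = max(lr * factor, min_lr)
                wait = 0
                trigger_epochs.append(epoch)
    return trigger_epochs, lr

# val_loss for epochs 4-28, copied from the training log above
val_losses = [0.0513, 0.0338, 0.0267, 0.0276, 0.0262, 0.0262, 0.0187,
              0.0296, 0.0223, 0.0217, 0.0242, 0.0219,
              0.0129, 0.0116, 0.0112, 0.0113, 0.0115, 0.0116, 0.0118, 0.0124,
              0.0124, 0.0123, 0.0122, 0.0122, 0.0122]

triggers, final_lr = simulate_reduce_lr(val_losses, first_epoch=4)
print(triggers, final_lr)  # plateaus at epochs 15, 23 and 28, as in the log
```

With these assumed settings the simulation reproduces the log's reduction epochs exactly; the third trigger at epoch 28 is clamped at `min_lr`, matching the final "reducing learning rate to 1e-05" message.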
SMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	50.75% Accuracy
MSE:	 60.72793966434877 
RMSE:	 7.792813334370892 
MAPE:	 6.245110563496496
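The SMA run closes with three error metrics. The notebook's own metric code is not shown in this chunk, and the two directional-accuracy figures ("Prediction vs Close" / "Prediction vs Prediction") are not defined here, so the sketch below covers only the standard textbook definitions of MSE, RMSE and MAPE on toy data, not the actual Google Trends series:

```python
# Standard definitions of the error metrics reported above. These are the
# textbook formulas, not necessarily the exact implementation the notebook
# uses; the toy `close`/`preds` arrays below are illustrative only.
import math

def mse(actual, predicted):
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    return math.sqrt(mse(actual, predicted))

def mape(actual, predicted):
    # mean absolute percentage error, in percent; assumes no zeros in `actual`
    return 100 * sum(abs((a - p) / a)
                     for a, p in zip(actual, predicted)) / len(actual)

close = [100.0, 110.0, 105.0]   # toy data, not the Google Trends series
preds = [102.0, 108.0, 107.0]
print(mse(close, preds), rmse(close, preds), mape(close, preds))
```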
EMA
EMA([input_arrays], [timeperiod=30])

Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
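The TA-Lib `EMA` described by the help text above applies exponential smoothing with factor `alpha = 2 / (timeperiod + 1)`. A pure-Python sketch of that recursion, seeded with the first observation (TA-Lib itself seeds with an n-period SMA, so its leading values can differ slightly from this version):

```python
# Exponential moving average with alpha = 2/(n+1), the smoothing factor
# TA-Lib's EMA uses. Seeding with prices[0] is a simplification: TA-Lib
# seeds with an n-period simple moving average instead.

def ema(prices, timeperiod=30):
    alpha = 2.0 / (timeperiod + 1)
    out = [prices[0]]
    for p in prices[1:]:
        out.append(alpha * p + (1 - alpha) * out[-1])
    return out

print(ema([1.0, 2.0, 3.0], timeperiod=2))  # [1.0, 1.666..., 2.555...]
```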
51

Working on EMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.59 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4231.556, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3761.238, Time=0.06 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.39 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3532.227, Time=0.11 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3394.496, Time=0.13 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.13 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.87 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3396.496, Time=0.26 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 3.590 seconds
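The stepwise search above (pmdarima's `auto_arima`) fits a sequence of candidate (p, d, q) orders and keeps the one with the lowest AIC; `AIC=inf` marks fits that failed. The selection step can be reproduced directly from the logged AIC values (the intercept variant at 3396.496 is omitted since it loses to its no-intercept counterpart):

```python
# Reproducing the model-selection step of the stepwise search above from the
# logged AIC values: pick the (p, d, q) order with the minimum AIC.
aic = {
    (1, 3, 1): float("inf"),
    (0, 3, 0): 4231.556,
    (1, 3, 0): 3761.238,
    (0, 3, 1): float("inf"),
    (2, 3, 0): 3532.227,
    (3, 3, 0): 3394.496,
    (3, 3, 1): float("inf"),
    (2, 3, 1): float("inf"),
}
best_order = min(aic, key=aic.get)
print(best_order, aic[best_order])  # (3, 3, 0) 3394.496
```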
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1693.248
Date:                Sun, 12 Dec 2021   AIC                           3394.496
Time:                        12:42:57   BIC                           3413.260
Sample:                             0   HQIC                          3401.702
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1982      0.003   -389.569      0.000      -1.204      -1.192
ar.L2         -0.8976      0.006   -139.811      0.000      -0.910      -0.885
ar.L3         -0.3984      0.006    -68.662      0.000      -0.410      -0.387
sigma2         3.9230      0.018    215.372      0.000       3.887       3.959
===================================================================================
Ljung-Box (L1) (Q):                  14.54   Jarque-Bera (JB):           2462173.05
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       273.82
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 
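The information criteria in the SARIMAX table can be recomputed from the reported log-likelihood as a consistency check. With k = 4 estimated parameters (ar.L1-L3 plus sigma2) and, as statsmodels appears to do here, an effective sample of n = 808 − 3 = 805 (three observations lost to d = 3 differencing):

```python
# Recomputing AIC / BIC / HQIC from the log-likelihood reported in the
# SARIMAX table above. k = 4 counts ar.L1-L3 and sigma2; n = 805 is the
# effective sample after d = 3 differencing (an inference from the fact
# that n = 805, not 808, reproduces the table's BIC and HQIC).
import math

loglik = -1693.248
k = 4
n = 808 - 3

aic = 2 * k - 2 * loglik                      # 3394.496
bic = k * math.log(n) - 2 * loglik            # ~3413.26
hqic = 2 * k * math.log(math.log(n)) - 2 * loglik  # ~3401.70
print(aic, bic, hqic)
```

All three values agree with the table to within rounding of the reported log-likelihood.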

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.39802, saving model to LSTM3.h5
16/16 - 3s - loss: 0.1339 - mse: 0.1339 - mae: 0.3073 - val_loss: 0.3980 - val_mse: 0.3980 - val_mae: 0.6034 - lr: 0.0010 - 3s/epoch - 184ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.39802
16/16 - 0s - loss: 0.0284 - mse: 0.0284 - mae: 0.1365 - val_loss: 0.4921 - val_mse: 0.4921 - val_mae: 0.6766 - lr: 0.0010 - 119ms/epoch - 7ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.39802 to 0.34963, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0339 - mse: 0.0339 - mae: 0.1483 - val_loss: 0.3496 - val_mse: 0.3496 - val_mae: 0.5668 - lr: 0.0010 - 139ms/epoch - 9ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.34963
16/16 - 0s - loss: 0.0149 - mse: 0.0149 - mae: 0.0965 - val_loss: 0.3667 - val_mse: 0.3667 - val_mae: 0.5819 - lr: 0.0010 - 125ms/epoch - 8ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.34963 to 0.31417, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0164 - mse: 0.0164 - mae: 0.1027 - val_loss: 0.3142 - val_mse: 0.3142 - val_mae: 0.5373 - lr: 0.0010 - 201ms/epoch - 13ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.31417
16/16 - 0s - loss: 0.0141 - mse: 0.0141 - mae: 0.0931 - val_loss: 0.3175 - val_mse: 0.3175 - val_mae: 0.5408 - lr: 0.0010 - 126ms/epoch - 8ms/step
Epoch 7/500

Epoch 00007: val_loss improved from 0.31417 to 0.28925, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0143 - mse: 0.0143 - mae: 0.0942 - val_loss: 0.2892 - val_mse: 0.2892 - val_mae: 0.5153 - lr: 0.0010 - 151ms/epoch - 9ms/step
Epoch 8/500

Epoch 00008: val_loss improved from 0.28925 to 0.28776, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0103 - mse: 0.0103 - mae: 0.0789 - val_loss: 0.2878 - val_mse: 0.2878 - val_mae: 0.5139 - lr: 0.0010 - 156ms/epoch - 10ms/step
Epoch 9/500

Epoch 00009: val_loss improved from 0.28776 to 0.26242, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0102 - mse: 0.0102 - mae: 0.0799 - val_loss: 0.2624 - val_mse: 0.2624 - val_mae: 0.4898 - lr: 0.0010 - 174ms/epoch - 11ms/step
Epoch 10/500

Epoch 00010: val_loss improved from 0.26242 to 0.24907, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0091 - mse: 0.0091 - mae: 0.0749 - val_loss: 0.2491 - val_mse: 0.2491 - val_mae: 0.4767 - lr: 0.0010 - 147ms/epoch - 9ms/step
Epoch 11/500

Epoch 00011: val_loss improved from 0.24907 to 0.22973, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0091 - mse: 0.0091 - mae: 0.0767 - val_loss: 0.2297 - val_mse: 0.2297 - val_mae: 0.4568 - lr: 0.0010 - 156ms/epoch - 10ms/step
Epoch 12/500

Epoch 00012: val_loss improved from 0.22973 to 0.22391, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0089 - mse: 0.0089 - mae: 0.0745 - val_loss: 0.2239 - val_mse: 0.2239 - val_mae: 0.4507 - lr: 0.0010 - 157ms/epoch - 10ms/step
Epoch 13/500

Epoch 00013: val_loss improved from 0.22391 to 0.20284, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0073 - mse: 0.0073 - mae: 0.0677 - val_loss: 0.2028 - val_mse: 0.2028 - val_mae: 0.4277 - lr: 0.0010 - 198ms/epoch - 12ms/step
Epoch 14/500

Epoch 00014: val_loss improved from 0.20284 to 0.19709, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0070 - mse: 0.0070 - mae: 0.0660 - val_loss: 0.1971 - val_mse: 0.1971 - val_mae: 0.4211 - lr: 0.0010 - 146ms/epoch - 9ms/step
Epoch 15/500

Epoch 00015: val_loss improved from 0.19709 to 0.18772, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0083 - mse: 0.0083 - mae: 0.0718 - val_loss: 0.1877 - val_mse: 0.1877 - val_mae: 0.4103 - lr: 0.0010 - 153ms/epoch - 10ms/step
Epoch 16/500

Epoch 00016: val_loss improved from 0.18772 to 0.16471, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0082 - mse: 0.0082 - mae: 0.0724 - val_loss: 0.1647 - val_mse: 0.1647 - val_mae: 0.3824 - lr: 0.0010 - 150ms/epoch - 9ms/step
Epoch 17/500

Epoch 00017: val_loss improved from 0.16471 to 0.16153, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0609 - val_loss: 0.1615 - val_mse: 0.1615 - val_mae: 0.3779 - lr: 0.0010 - 164ms/epoch - 10ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.16153
16/16 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0602 - val_loss: 0.1621 - val_mse: 0.1621 - val_mae: 0.3785 - lr: 0.0010 - 138ms/epoch - 9ms/step
Epoch 19/500

Epoch 00019: val_loss improved from 0.16153 to 0.14689, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0652 - val_loss: 0.1469 - val_mse: 0.1469 - val_mae: 0.3588 - lr: 0.0010 - 203ms/epoch - 13ms/step
Epoch 20/500

Epoch 00020: val_loss improved from 0.14689 to 0.13731, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0605 - val_loss: 0.1373 - val_mse: 0.1373 - val_mae: 0.3458 - lr: 0.0010 - 142ms/epoch - 9ms/step
Epoch 21/500

Epoch 00021: val_loss improved from 0.13731 to 0.13575, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0587 - val_loss: 0.1358 - val_mse: 0.1358 - val_mae: 0.3438 - lr: 0.0010 - 152ms/epoch - 9ms/step
Epoch 22/500

Epoch 00022: val_loss improved from 0.13575 to 0.12955, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0635 - val_loss: 0.1296 - val_mse: 0.1296 - val_mae: 0.3351 - lr: 0.0010 - 150ms/epoch - 9ms/step
Epoch 23/500

Epoch 00023: val_loss improved from 0.12955 to 0.11616, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0609 - val_loss: 0.1162 - val_mse: 0.1162 - val_mae: 0.3151 - lr: 0.0010 - 153ms/epoch - 10ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.11616
16/16 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0604 - val_loss: 0.1163 - val_mse: 0.1163 - val_mae: 0.3151 - lr: 0.0010 - 136ms/epoch - 8ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.11616
16/16 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0577 - val_loss: 0.1294 - val_mse: 0.1294 - val_mae: 0.3339 - lr: 0.0010 - 126ms/epoch - 8ms/step
Epoch 26/500

Epoch 00026: val_loss improved from 0.11616 to 0.11583, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0575 - val_loss: 0.1158 - val_mse: 0.1158 - val_mae: 0.3147 - lr: 0.0010 - 156ms/epoch - 10ms/step
Epoch 27/500

Epoch 00027: val_loss improved from 0.11583 to 0.10589, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0574 - val_loss: 0.1059 - val_mse: 0.1059 - val_mae: 0.2999 - lr: 0.0010 - 159ms/epoch - 10ms/step
Epoch 28/500

Epoch 00028: val_loss improved from 0.10589 to 0.10370, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0552 - val_loss: 0.1037 - val_mse: 0.1037 - val_mae: 0.2963 - lr: 0.0010 - 154ms/epoch - 10ms/step
Epoch 29/500

Epoch 00029: val_loss improved from 0.10370 to 0.10333, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0564 - val_loss: 0.1033 - val_mse: 0.1033 - val_mae: 0.2954 - lr: 0.0010 - 148ms/epoch - 9ms/step
Epoch 30/500

Epoch 00030: val_loss improved from 0.10333 to 0.09683, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0593 - val_loss: 0.0968 - val_mse: 0.0968 - val_mae: 0.2844 - lr: 0.0010 - 157ms/epoch - 10ms/step
Epoch 31/500

Epoch 00031: val_loss improved from 0.09683 to 0.08818, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0632 - val_loss: 0.0882 - val_mse: 0.0882 - val_mae: 0.2699 - lr: 0.0010 - 156ms/epoch - 10ms/step
Epoch 32/500

Epoch 00032: val_loss improved from 0.08818 to 0.07657, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0578 - val_loss: 0.0766 - val_mse: 0.0766 - val_mae: 0.2495 - lr: 0.0010 - 156ms/epoch - 10ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.07657
16/16 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0518 - val_loss: 0.0859 - val_mse: 0.0859 - val_mae: 0.2660 - lr: 0.0010 - 172ms/epoch - 11ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.07657
16/16 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0535 - val_loss: 0.0876 - val_mse: 0.0876 - val_mae: 0.2690 - lr: 0.0010 - 132ms/epoch - 8ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.07657
16/16 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0545 - val_loss: 0.0862 - val_mse: 0.0862 - val_mae: 0.2667 - lr: 0.0010 - 136ms/epoch - 9ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.07657
16/16 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0537 - val_loss: 0.0810 - val_mse: 0.0810 - val_mae: 0.2577 - lr: 0.0010 - 122ms/epoch - 8ms/step
Epoch 37/500

Epoch 00037: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00037: val_loss did not improve from 0.07657
16/16 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0553 - val_loss: 0.0869 - val_mse: 0.0869 - val_mae: 0.2678 - lr: 0.0010 - 133ms/epoch - 8ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.07657
16/16 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0517 - val_loss: 0.0832 - val_mse: 0.0832 - val_mae: 0.2615 - lr: 1.0000e-04 - 135ms/epoch - 8ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.07657
16/16 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0541 - val_loss: 0.0803 - val_mse: 0.0803 - val_mae: 0.2565 - lr: 1.0000e-04 - 132ms/epoch - 8ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.07657
16/16 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0511 - val_loss: 0.0810 - val_mse: 0.0810 - val_mae: 0.2578 - lr: 1.0000e-04 - 135ms/epoch - 8ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.07657
16/16 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0509 - val_loss: 0.0809 - val_mse: 0.0809 - val_mae: 0.2577 - lr: 1.0000e-04 - 128ms/epoch - 8ms/step
Epoch 42/500

Epoch 00042: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00042: val_loss did not improve from 0.07657
16/16 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0504 - val_loss: 0.0806 - val_mse: 0.0806 - val_mae: 0.2572 - lr: 1.0000e-04 - 133ms/epoch - 8ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.07657
16/16 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0488 - val_loss: 0.0807 - val_mse: 0.0807 - val_mae: 0.2573 - lr: 1.0000e-05 - 119ms/epoch - 7ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.07657
16/16 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0508 - val_loss: 0.0808 - val_mse: 0.0808 - val_mae: 0.2574 - lr: 1.0000e-05 - 144ms/epoch - 9ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.07657
16/16 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0520 - val_loss: 0.0808 - val_mse: 0.0808 - val_mae: 0.2576 - lr: 1.0000e-05 - 176ms/epoch - 11ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.07657
16/16 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0481 - val_loss: 0.0810 - val_mse: 0.0810 - val_mae: 0.2579 - lr: 1.0000e-05 - 142ms/epoch - 9ms/step
Epoch 47/500

Epoch 00047: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00047: val_loss did not improve from 0.07657
16/16 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0483 - val_loss: 0.0812 - val_mse: 0.0812 - val_mae: 0.2583 - lr: 1.0000e-05 - 128ms/epoch - 8ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.07657
16/16 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0497 - val_loss: 0.0813 - val_mse: 0.0813 - val_mae: 0.2584 - lr: 1.0000e-05 - 137ms/epoch - 9ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.07657
16/16 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0500 - val_loss: 0.0813 - val_mse: 0.0813 - val_mae: 0.2584 - lr: 1.0000e-05 - 127ms/epoch - 8ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.07657
16/16 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0482 - val_loss: 0.0813 - val_mse: 0.0813 - val_mae: 0.2584 - lr: 1.0000e-05 - 130ms/epoch - 8ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.07657
16/16 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0504 - val_loss: 0.0813 - val_mse: 0.0813 - val_mae: 0.2584 - lr: 1.0000e-05 - 134ms/epoch - 8ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.07657
16/16 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0487 - val_loss: 0.0814 - val_mse: 0.0814 - val_mae: 0.2585 - lr: 1.0000e-05 - 129ms/epoch - 8ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.07657
16/16 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0491 - val_loss: 0.0813 - val_mse: 0.0813 - val_mae: 0.2584 - lr: 1.0000e-05 - 132ms/epoch - 8ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.07657
16/16 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0505 - val_loss: 0.0814 - val_mse: 0.0814 - val_mae: 0.2586 - lr: 1.0000e-05 - 141ms/epoch - 9ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.07657
16/16 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0479 - val_loss: 0.0814 - val_mse: 0.0814 - val_mae: 0.2585 - lr: 1.0000e-05 - 134ms/epoch - 8ms/step
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.07657
16/16 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0500 - val_loss: 0.0812 - val_mse: 0.0812 - val_mae: 0.2583 - lr: 1.0000e-05 - 136ms/epoch - 9ms/step
Epoch 57/500

Epoch 00057: val_loss did not improve from 0.07657
16/16 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0483 - val_loss: 0.0813 - val_mse: 0.0813 - val_mae: 0.2584 - lr: 1.0000e-05 - 128ms/epoch - 8ms/step
Epoch 58/500

Epoch 00058: val_loss did not improve from 0.07657
16/16 - 0s - loss: 0.0035 - mse: 0.0035 - mae: 0.0467 - val_loss: 0.0814 - val_mse: 0.0814 - val_mae: 0.2586 - lr: 1.0000e-05 - 140ms/epoch - 9ms/step
Epoch 59/500

Epoch 00059: val_loss did not improve from 0.07657
16/16 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0505 - val_loss: 0.0815 - val_mse: 0.0815 - val_mae: 0.2587 - lr: 1.0000e-05 - 128ms/epoch - 8ms/step
Epoch 60/500

Epoch 00060: val_loss did not improve from 0.07657
16/16 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0491 - val_loss: 0.0814 - val_mse: 0.0814 - val_mae: 0.2586 - lr: 1.0000e-05 - 129ms/epoch - 8ms/step
Epoch 61/500

Epoch 00061: val_loss did not improve from 0.07657
16/16 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0513 - val_loss: 0.0814 - val_mse: 0.0814 - val_mae: 0.2586 - lr: 1.0000e-05 - 128ms/epoch - 8ms/step
Epoch 62/500

Epoch 00062: val_loss did not improve from 0.07657
16/16 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0493 - val_loss: 0.0814 - val_mse: 0.0814 - val_mae: 0.2587 - lr: 1.0000e-05 - 138ms/epoch - 9ms/step
Epoch 63/500

Epoch 00063: val_loss did not improve from 0.07657
16/16 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0482 - val_loss: 0.0814 - val_mse: 0.0814 - val_mae: 0.2587 - lr: 1.0000e-05 - 139ms/epoch - 9ms/step
Epoch 64/500

Epoch 00064: val_loss did not improve from 0.07657
16/16 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0477 - val_loss: 0.0814 - val_mse: 0.0814 - val_mae: 0.2586 - lr: 1.0000e-05 - 124ms/epoch - 8ms/step
Epoch 65/500

Epoch 00065: val_loss did not improve from 0.07657
16/16 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0509 - val_loss: 0.0813 - val_mse: 0.0813 - val_mae: 0.2584 - lr: 1.0000e-05 - 146ms/epoch - 9ms/step
Epoch 66/500

Epoch 00066: val_loss did not improve from 0.07657
16/16 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0487 - val_loss: 0.0812 - val_mse: 0.0812 - val_mae: 0.2584 - lr: 1.0000e-05 - 130ms/epoch - 8ms/step
Epoch 67/500

Epoch 00067: val_loss did not improve from 0.07657
16/16 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0475 - val_loss: 0.0812 - val_mse: 0.0812 - val_mae: 0.2583 - lr: 1.0000e-05 - 122ms/epoch - 8ms/step
Epoch 68/500

Epoch 00068: val_loss did not improve from 0.07657
16/16 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0500 - val_loss: 0.0811 - val_mse: 0.0811 - val_mae: 0.2581 - lr: 1.0000e-05 - 131ms/epoch - 8ms/step
Epoch 69/500

Epoch 00069: val_loss did not improve from 0.07657
16/16 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0516 - val_loss: 0.0811 - val_mse: 0.0811 - val_mae: 0.2581 - lr: 1.0000e-05 - 129ms/epoch - 8ms/step
Epoch 70/500

Epoch 00070: val_loss did not improve from 0.07657
16/16 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0500 - val_loss: 0.0810 - val_mse: 0.0810 - val_mae: 0.2580 - lr: 1.0000e-05 - 124ms/epoch - 8ms/step
Epoch 71/500

Epoch 00071: val_loss did not improve from 0.07657
16/16 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0513 - val_loss: 0.0808 - val_mse: 0.0808 - val_mae: 0.2576 - lr: 1.0000e-05 - 131ms/epoch - 8ms/step
Epoch 72/500

Epoch 00072: val_loss did not improve from 0.07657
16/16 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0515 - val_loss: 0.0807 - val_mse: 0.0807 - val_mae: 0.2575 - lr: 1.0000e-05 - 125ms/epoch - 8ms/step
Epoch 73/500

Epoch 00073: val_loss did not improve from 0.07657
16/16 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0482 - val_loss: 0.0808 - val_mse: 0.0808 - val_mae: 0.2576 - lr: 1.0000e-05 - 151ms/epoch - 9ms/step
Epoch 74/500

Epoch 00074: val_loss did not improve from 0.07657
16/16 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0505 - val_loss: 0.0808 - val_mse: 0.0808 - val_mae: 0.2577 - lr: 1.0000e-05 - 131ms/epoch - 8ms/step
Epoch 75/500

Epoch 00075: val_loss did not improve from 0.07657
16/16 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0495 - val_loss: 0.0807 - val_mse: 0.0807 - val_mae: 0.2575 - lr: 1.0000e-05 - 133ms/epoch - 8ms/step
Epoch 76/500

Epoch 00076: val_loss did not improve from 0.07657
16/16 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0509 - val_loss: 0.0806 - val_mse: 0.0806 - val_mae: 0.2574 - lr: 1.0000e-05 - 128ms/epoch - 8ms/step
Epoch 77/500

Epoch 00077: val_loss did not improve from 0.07657
16/16 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0502 - val_loss: 0.0807 - val_mse: 0.0807 - val_mae: 0.2574 - lr: 1.0000e-05 - 130ms/epoch - 8ms/step
Epoch 78/500

Epoch 00078: val_loss did not improve from 0.07657
16/16 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0488 - val_loss: 0.0807 - val_mse: 0.0807 - val_mae: 0.2575 - lr: 1.0000e-05 - 130ms/epoch - 8ms/step
Epoch 79/500

Epoch 00079: val_loss did not improve from 0.07657
16/16 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0490 - val_loss: 0.0807 - val_mse: 0.0807 - val_mae: 0.2575 - lr: 1.0000e-05 - 130ms/epoch - 8ms/step
Epoch 80/500

Epoch 00080: val_loss did not improve from 0.07657
16/16 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0490 - val_loss: 0.0806 - val_mse: 0.0806 - val_mae: 0.2572 - lr: 1.0000e-05 - 197ms/epoch - 12ms/step
Epoch 81/500

Epoch 00081: val_loss did not improve from 0.07657
16/16 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0510 - val_loss: 0.0805 - val_mse: 0.0805 - val_mae: 0.2572 - lr: 1.0000e-05 - 130ms/epoch - 8ms/step
Epoch 82/500

Epoch 00082: val_loss did not improve from 0.07657
16/16 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0500 - val_loss: 0.0803 - val_mse: 0.0803 - val_mae: 0.2569 - lr: 1.0000e-05 - 132ms/epoch - 8ms/step
Epoch 00082: early stopping
SMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	50.75% Accuracy
MSE:	 60.72793966434877 
RMSE:	 7.792813334370892 
MAPE:	 6.245110563496496

EMA
Prediction vs Close:		53.36% Accuracy
Prediction vs Prediction:	49.63% Accuracy
MSE:	 24.31424046870049 
RMSE:	 4.930947218202654 
MAPE:	 4.072073008160127
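The accuracy and error figures printed above can be reproduced with a few lines of NumPy. The exact directional definitions used in the notebook are not visible in this output, so the two accuracy variants below are plausible reconstructions, and the array names `close` and `pred` are illustrative:

```python
import numpy as np

def evaluate(close, pred):
    """Directional accuracy plus MSE/RMSE/MAPE, as printed in the logs above."""
    actual_dir = np.sign(np.diff(close))
    # "Prediction vs Close": predicted move measured against the previous close
    acc_vs_close = np.mean(np.sign(pred[1:] - close[:-1]) == actual_dir) * 100
    # "Prediction vs Prediction": direction of the prediction series itself
    acc_vs_pred = np.mean(np.sign(np.diff(pred)) == actual_dir) * 100

    mse = np.mean((close - pred) ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs((close - pred) / close)) * 100
    return acc_vs_close, acc_vs_pred, mse, rmse, mape
```

MSE, RMSE, and MAPE here are the standard definitions; only the two accuracy columns involve a judgment call about which series' direction is compared.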

WMA
WMA([input_arrays], [timeperiod=30])

Weighted Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
49
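The TA-Lib help text above describes WMA: a linearly weighted average of the last `timeperiod` prices, with the most recent price weighted heaviest. A pandas sketch of the same formula (it mirrors the definition, not TA-Lib's implementation):

```python
import numpy as np
import pandas as pd

def wma(price: pd.Series, timeperiod: int = 30) -> pd.Series:
    """Weighted Moving Average: linear weights 1..timeperiod, newest price heaviest."""
    weights = np.arange(1, timeperiod + 1, dtype=float)
    return price.rolling(timeperiod).apply(
        lambda window: np.dot(window, weights) / weights.sum(), raw=True
    )
```

For example, `wma(pd.Series([1.0, 2.0, 3.0]), timeperiod=3)` ends at (1·1 + 2·2 + 3·3) / 6 ≈ 2.333; the first `timeperiod − 1` values are NaN, which is why a warm-up count (49 here) is reported before the predictions start.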

Working on WMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.59 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4264.089, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3793.930, Time=0.06 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.32 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3564.923, Time=0.10 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3427.258, Time=0.12 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.70 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.64 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3429.258, Time=0.24 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 3.813 seconds
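The stepwise trace above is pmdarima's `auto_arima` fitting candidate orders and keeping the one with the lowest AIC. The selection principle can be sketched without pmdarima: difference the series `d` times, fit each candidate AR(p) by least squares, and score with AIC ≈ n·ln(RSS/n) + 2k. This is only the idea on hypothetical data, not pmdarima's actual estimator:

```python
import numpy as np

def best_ar_order_by_aic(y, d=3, max_p=3):
    """Pick p for an ARIMA(p, d, 0) by minimising AIC on the d-times-differenced series."""
    z = np.diff(y, n=d)                    # apply the d-th difference
    n_z = len(z)
    best_p, best_aic = None, np.inf
    for p in range(1, max_p + 1):
        # Lagged design matrix: regress z_t on z_{t-1}, ..., z_{t-p}
        X = np.column_stack([z[p - 1 - k : n_z - 1 - k] for k in range(p)])
        target = z[p:]
        coef, *_ = np.linalg.lstsq(X, target, rcond=None)
        resid = target - X @ coef
        rss = float(resid @ resid)
        n = len(target)
        aic = n * np.log(rss / n) + 2 * p   # simplified AIC: fit term + 2 * parameters
        if aic < best_aic:
            best_p, best_aic = p, aic
    return best_p
```

pmdarima additionally tries MA terms, intercepts, and seasonal orders (the `(0,0,0)[0]` part), and reports `AIC=inf` when a candidate fails to converge.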
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1709.629
Date:                Sun, 12 Dec 2021   AIC                           3427.258
Time:                        12:44:50   BIC                           3446.021
Sample:                             0   HQIC                          3434.464
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1981      0.003   -389.386      0.000      -1.204      -1.192
ar.L2         -0.8974      0.006   -139.699      0.000      -0.910      -0.885
ar.L3         -0.3983      0.006    -68.737      0.000      -0.410      -0.387
sigma2         4.0860      0.019    215.311      0.000       4.049       4.123
===================================================================================
Ljung-Box (L1) (Q):                  14.57   Jarque-Bera (JB):           2460901.70
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       273.75
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 
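The training log that follows shows three Keras callbacks at work: ModelCheckpoint (saving `LSTM3.h5` whenever `val_loss` improves), ReduceLROnPlateau (cutting the learning rate by 10×, from 1e-3 down to a 1e-5 floor), and EarlyStopping. The plateau logic can be sketched in plain Python; the patience value below is a guess read off the log, not confirmed by code in this notebook:

```python
class PlateauLR:
    """Minimal ReduceLROnPlateau-style schedule mirroring the behaviour in the log."""

    def __init__(self, lr=1e-3, factor=0.1, patience=5, min_lr=1e-5):
        self.lr, self.factor, self.patience, self.min_lr = lr, factor, patience, min_lr
        self.best = float("inf")
        self.wait = 0

    def step(self, val_loss):
        if val_loss < self.best:            # improvement: reset the stall counter
            self.best = val_loss
            self.wait = 0
        else:                               # stalled epoch
            self.wait += 1
            if self.wait >= self.patience:  # too many stalls: cut lr, respect floor
                self.lr = max(self.lr * self.factor, self.min_lr)
                self.wait = 0
        return self.lr
```

EarlyStopping follows the same pattern with a much larger patience, which is why training runs on for dozens of non-improving epochs before the `early stopping` line appears.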

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.20576, saving model to LSTM3.h5
17/17 - 3s - loss: 0.1644 - mse: 0.1644 - mae: 0.2939 - val_loss: 0.2058 - val_mse: 0.2058 - val_mae: 0.4444 - lr: 0.0010 - 3s/epoch - 179ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.20576 to 0.12386, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0361 - mse: 0.0361 - mae: 0.1547 - val_loss: 0.1239 - val_mse: 0.1239 - val_mae: 0.3397 - lr: 0.0010 - 169ms/epoch - 10ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.12386 to 0.09635, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0239 - mse: 0.0239 - mae: 0.1202 - val_loss: 0.0964 - val_mse: 0.0964 - val_mae: 0.2956 - lr: 0.0010 - 164ms/epoch - 10ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.09635 to 0.06144, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0161 - mse: 0.0161 - mae: 0.0996 - val_loss: 0.0614 - val_mse: 0.0614 - val_mae: 0.2271 - lr: 0.0010 - 151ms/epoch - 9ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.06144 to 0.04145, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0152 - mse: 0.0152 - mae: 0.0969 - val_loss: 0.0414 - val_mse: 0.0414 - val_mae: 0.1773 - lr: 0.0010 - 150ms/epoch - 9ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.04145 to 0.02859, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0118 - mse: 0.0118 - mae: 0.0858 - val_loss: 0.0286 - val_mse: 0.0286 - val_mae: 0.1427 - lr: 0.0010 - 159ms/epoch - 9ms/step
Epoch 7/500

Epoch 00007: val_loss improved from 0.02859 to 0.02033, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0119 - mse: 0.0119 - mae: 0.0860 - val_loss: 0.0203 - val_mse: 0.0203 - val_mae: 0.1198 - lr: 0.0010 - 201ms/epoch - 12ms/step
Epoch 8/500

Epoch 00008: val_loss improved from 0.02033 to 0.01659, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0098 - mse: 0.0098 - mae: 0.0780 - val_loss: 0.0166 - val_mse: 0.0166 - val_mae: 0.1087 - lr: 0.0010 - 157ms/epoch - 9ms/step
Epoch 9/500

Epoch 00009: val_loss improved from 0.01659 to 0.01485, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0091 - mse: 0.0091 - mae: 0.0749 - val_loss: 0.0148 - val_mse: 0.0148 - val_mae: 0.1034 - lr: 0.0010 - 167ms/epoch - 10ms/step
Epoch 10/500

Epoch 00010: val_loss improved from 0.01485 to 0.01360, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0086 - mse: 0.0086 - mae: 0.0728 - val_loss: 0.0136 - val_mse: 0.0136 - val_mae: 0.0995 - lr: 0.0010 - 158ms/epoch - 9ms/step
Epoch 11/500

Epoch 00011: val_loss improved from 0.01360 to 0.01305, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0089 - mse: 0.0089 - mae: 0.0735 - val_loss: 0.0131 - val_mse: 0.0131 - val_mae: 0.0975 - lr: 0.0010 - 155ms/epoch - 9ms/step
Epoch 12/500

Epoch 00012: val_loss improved from 0.01305 to 0.01222, saving model to LSTM3.h5
17/17 - 0s - loss: 0.0070 - mse: 0.0070 - mae: 0.0654 - val_loss: 0.0122 - val_mse: 0.0122 - val_mae: 0.0946 - lr: 0.0010 - 164ms/epoch - 10ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.01222
17/17 - 0s - loss: 0.0075 - mse: 0.0075 - mae: 0.0675 - val_loss: 0.0123 - val_mse: 0.0123 - val_mae: 0.0940 - lr: 0.0010 - 135ms/epoch - 8ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.01222
17/17 - 0s - loss: 0.0071 - mse: 0.0071 - mae: 0.0667 - val_loss: 0.0123 - val_mse: 0.0123 - val_mae: 0.0933 - lr: 0.0010 - 135ms/epoch - 8ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.01222
17/17 - 0s - loss: 0.0078 - mse: 0.0078 - mae: 0.0675 - val_loss: 0.0128 - val_mse: 0.0128 - val_mae: 0.0934 - lr: 0.0010 - 173ms/epoch - 10ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.01222
17/17 - 0s - loss: 0.0070 - mse: 0.0070 - mae: 0.0662 - val_loss: 0.0136 - val_mse: 0.0136 - val_mae: 0.0943 - lr: 0.0010 - 140ms/epoch - 8ms/step
Epoch 17/500

Epoch 00017: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00017: val_loss did not improve from 0.01222
17/17 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0595 - val_loss: 0.0160 - val_mse: 0.0160 - val_mae: 0.0998 - lr: 0.0010 - 153ms/epoch - 9ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.01222
17/17 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0623 - val_loss: 0.0157 - val_mse: 0.0157 - val_mae: 0.0991 - lr: 1.0000e-04 - 148ms/epoch - 9ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.01222
17/17 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0588 - val_loss: 0.0155 - val_mse: 0.0155 - val_mae: 0.0984 - lr: 1.0000e-04 - 135ms/epoch - 8ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.01222
17/17 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0601 - val_loss: 0.0153 - val_mse: 0.0153 - val_mae: 0.0980 - lr: 1.0000e-04 - 188ms/epoch - 11ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.01222
17/17 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0620 - val_loss: 0.0152 - val_mse: 0.0152 - val_mae: 0.0975 - lr: 1.0000e-04 - 178ms/epoch - 10ms/step
Epoch 22/500

Epoch 00022: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00022: val_loss did not improve from 0.01222
17/17 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0608 - val_loss: 0.0150 - val_mse: 0.0150 - val_mae: 0.0969 - lr: 1.0000e-04 - 141ms/epoch - 8ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.01222
17/17 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0638 - val_loss: 0.0150 - val_mse: 0.0150 - val_mae: 0.0970 - lr: 1.0000e-05 - 133ms/epoch - 8ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.01222
17/17 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0582 - val_loss: 0.0151 - val_mse: 0.0151 - val_mae: 0.0971 - lr: 1.0000e-05 - 139ms/epoch - 8ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.01222
17/17 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0619 - val_loss: 0.0151 - val_mse: 0.0151 - val_mae: 0.0972 - lr: 1.0000e-05 - 140ms/epoch - 8ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.01222
17/17 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0627 - val_loss: 0.0151 - val_mse: 0.0151 - val_mae: 0.0973 - lr: 1.0000e-05 - 138ms/epoch - 8ms/step
Epoch 27/500

Epoch 00027: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00027: val_loss did not improve from 0.01222
17/17 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0595 - val_loss: 0.0152 - val_mse: 0.0152 - val_mae: 0.0973 - lr: 1.0000e-05 - 146ms/epoch - 9ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.01222
17/17 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0577 - val_loss: 0.0152 - val_mse: 0.0152 - val_mae: 0.0974 - lr: 1.0000e-05 - 193ms/epoch - 11ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.01222
17/17 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0596 - val_loss: 0.0152 - val_mse: 0.0152 - val_mae: 0.0973 - lr: 1.0000e-05 - 171ms/epoch - 10ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.01222
17/17 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0594 - val_loss: 0.0152 - val_mse: 0.0152 - val_mae: 0.0974 - lr: 1.0000e-05 - 139ms/epoch - 8ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.01222
17/17 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0576 - val_loss: 0.0152 - val_mse: 0.0152 - val_mae: 0.0975 - lr: 1.0000e-05 - 150ms/epoch - 9ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.01222
17/17 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0593 - val_loss: 0.0152 - val_mse: 0.0152 - val_mae: 0.0975 - lr: 1.0000e-05 - 139ms/epoch - 8ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.01222
17/17 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0593 - val_loss: 0.0153 - val_mse: 0.0153 - val_mae: 0.0976 - lr: 1.0000e-05 - 136ms/epoch - 8ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.01222
17/17 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0593 - val_loss: 0.0153 - val_mse: 0.0153 - val_mae: 0.0978 - lr: 1.0000e-05 - 155ms/epoch - 9ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.01222
17/17 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0615 - val_loss: 0.0154 - val_mse: 0.0154 - val_mae: 0.0978 - lr: 1.0000e-05 - 146ms/epoch - 9ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.01222
17/17 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0592 - val_loss: 0.0154 - val_mse: 0.0154 - val_mae: 0.0979 - lr: 1.0000e-05 - 128ms/epoch - 8ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.01222
17/17 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0589 - val_loss: 0.0155 - val_mse: 0.0155 - val_mae: 0.0980 - lr: 1.0000e-05 - 140ms/epoch - 8ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.01222
17/17 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0562 - val_loss: 0.0155 - val_mse: 0.0155 - val_mae: 0.0980 - lr: 1.0000e-05 - 140ms/epoch - 8ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.01222
17/17 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0604 - val_loss: 0.0155 - val_mse: 0.0155 - val_mae: 0.0981 - lr: 1.0000e-05 - 138ms/epoch - 8ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.01222
17/17 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0622 - val_loss: 0.0155 - val_mse: 0.0155 - val_mae: 0.0981 - lr: 1.0000e-05 - 142ms/epoch - 8ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.01222
17/17 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0594 - val_loss: 0.0155 - val_mse: 0.0155 - val_mae: 0.0981 - lr: 1.0000e-05 - 148ms/epoch - 9ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.01222
17/17 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0578 - val_loss: 0.0155 - val_mse: 0.0155 - val_mae: 0.0982 - lr: 1.0000e-05 - 131ms/epoch - 8ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.01222
17/17 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0604 - val_loss: 0.0155 - val_mse: 0.0155 - val_mae: 0.0982 - lr: 1.0000e-05 - 142ms/epoch - 8ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.01222
17/17 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0569 - val_loss: 0.0156 - val_mse: 0.0156 - val_mae: 0.0983 - lr: 1.0000e-05 - 143ms/epoch - 8ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.01222
17/17 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0579 - val_loss: 0.0157 - val_mse: 0.0157 - val_mae: 0.0984 - lr: 1.0000e-05 - 134ms/epoch - 8ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.01222
17/17 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0568 - val_loss: 0.0157 - val_mse: 0.0157 - val_mae: 0.0984 - lr: 1.0000e-05 - 146ms/epoch - 9ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.01222
17/17 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0560 - val_loss: 0.0157 - val_mse: 0.0157 - val_mae: 0.0984 - lr: 1.0000e-05 - 134ms/epoch - 8ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.01222
17/17 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0598 - val_loss: 0.0157 - val_mse: 0.0157 - val_mae: 0.0985 - lr: 1.0000e-05 - 146ms/epoch - 9ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.01222
17/17 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0594 - val_loss: 0.0157 - val_mse: 0.0157 - val_mae: 0.0984 - lr: 1.0000e-05 - 133ms/epoch - 8ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.01222
17/17 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0593 - val_loss: 0.0157 - val_mse: 0.0157 - val_mae: 0.0985 - lr: 1.0000e-05 - 137ms/epoch - 8ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.01222
17/17 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0577 - val_loss: 0.0158 - val_mse: 0.0158 - val_mae: 0.0986 - lr: 1.0000e-05 - 144ms/epoch - 8ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.01222
17/17 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0584 - val_loss: 0.0158 - val_mse: 0.0158 - val_mae: 0.0987 - lr: 1.0000e-05 - 143ms/epoch - 8ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.01222
17/17 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0566 - val_loss: 0.0158 - val_mse: 0.0158 - val_mae: 0.0987 - lr: 1.0000e-05 - 145ms/epoch - 9ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.01222
17/17 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0563 - val_loss: 0.0158 - val_mse: 0.0158 - val_mae: 0.0987 - lr: 1.0000e-05 - 179ms/epoch - 11ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.01222
17/17 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0579 - val_loss: 0.0159 - val_mse: 0.0159 - val_mae: 0.0989 - lr: 1.0000e-05 - 140ms/epoch - 8ms/step
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.01222
17/17 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0554 - val_loss: 0.0160 - val_mse: 0.0160 - val_mae: 0.0991 - lr: 1.0000e-05 - 143ms/epoch - 8ms/step
Epoch 57/500

Epoch 00057: val_loss did not improve from 0.01222
17/17 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0567 - val_loss: 0.0160 - val_mse: 0.0160 - val_mae: 0.0992 - lr: 1.0000e-05 - 145ms/epoch - 9ms/step
Epoch 58/500

Epoch 00058: val_loss did not improve from 0.01222
17/17 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0589 - val_loss: 0.0160 - val_mse: 0.0160 - val_mae: 0.0992 - lr: 1.0000e-05 - 142ms/epoch - 8ms/step
Epoch 59/500

Epoch 00059: val_loss did not improve from 0.01222
17/17 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0553 - val_loss: 0.0160 - val_mse: 0.0160 - val_mae: 0.0992 - lr: 1.0000e-05 - 145ms/epoch - 9ms/step
Epoch 60/500

Epoch 00060: val_loss did not improve from 0.01222
17/17 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0597 - val_loss: 0.0161 - val_mse: 0.0161 - val_mae: 0.0993 - lr: 1.0000e-05 - 139ms/epoch - 8ms/step
Epoch 61/500

Epoch 00061: val_loss did not improve from 0.01222
17/17 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0569 - val_loss: 0.0161 - val_mse: 0.0161 - val_mae: 0.0994 - lr: 1.0000e-05 - 190ms/epoch - 11ms/step
Epoch 62/500

Epoch 00062: val_loss did not improve from 0.01222
17/17 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0599 - val_loss: 0.0161 - val_mse: 0.0161 - val_mae: 0.0994 - lr: 1.0000e-05 - 143ms/epoch - 8ms/step
Epoch 00062: early stopping
SMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	50.75% Accuracy
MSE:	 60.72793966434877 
RMSE:	 7.792813334370892 
MAPE:	 6.245110563496496

EMA
Prediction vs Close:		53.36% Accuracy
Prediction vs Prediction:	49.63% Accuracy
MSE:	 24.31424046870049 
RMSE:	 4.930947218202654 
MAPE:	 4.072073008160127

WMA
Prediction vs Close:		53.36% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 68.15170187847146 
RMSE:	 8.255404404296101 
MAPE:	 6.806281257852447

DEMA
DEMA([input_arrays], [timeperiod=30])

Double Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
89
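The help text above is TA-Lib's DEMA, which reduces EMA lag by combining two EMAs: DEMA = 2·EMA(n) − EMA(EMA(n)). A pandas sketch of that formula (it mirrors the definition, not TA-Lib's exact warm-up handling, hence the larger unstable-period count of 89 reported above):

```python
import pandas as pd

def dema(price: pd.Series, timeperiod: int = 30) -> pd.Series:
    """Double Exponential Moving Average: 2*EMA - EMA(EMA), which cuts EMA lag."""
    ema1 = price.ewm(span=timeperiod, adjust=False).mean()
    ema2 = ema1.ewm(span=timeperiod, adjust=False).mean()
    return 2 * ema1 - ema2
```

On a constant series the two EMAs coincide, so DEMA returns the constant itself; on a trending series it hugs the price more closely than a plain EMA.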

Working on DEMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.58 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4436.126, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3965.317, Time=0.06 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.53 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3736.589, Time=0.09 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3598.951, Time=0.11 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.23 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=1.25 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3600.951, Time=0.27 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 4.158 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1795.475
Date:                Sun, 12 Dec 2021   AIC                           3598.951
Time:                        12:46:42   BIC                           3617.714
Sample:                             0   HQIC                          3606.157
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1983      0.003   -389.581      0.000      -1.204      -1.192
ar.L2         -0.8973      0.006   -139.732      0.000      -0.910      -0.885
ar.L3         -0.3983      0.006    -68.649      0.000      -0.410      -0.387
sigma2         5.0573      0.023    215.292      0.000       5.011       5.103
===================================================================================
Ljung-Box (L1) (Q):                  14.41   Jarque-Bera (JB):           2460553.80
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.89
Prob(H) (two-sided):                  0.00   Kurtosis:                       273.74
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 
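In the usual hybrid scheme, the ARIMA(3,3,0) fit above supplies the linear forecast and the LSTM that trains next models the ARIMA residuals; the final prediction adds the two back together. Whether this notebook follows that scheme exactly is not visible from the output alone, and the array values below are illustrative, but the combination step itself is just elementwise addition:

```python
import numpy as np

# arima_forecast: ARIMA point forecasts over the test window (illustrative values)
arima_forecast = np.array([100.0, 101.5, 102.0])
# lstm_residual_forecast: LSTM predictions of the ARIMA residual series (illustrative)
lstm_residual_forecast = np.array([0.8, -0.3, 0.5])

# Hybrid forecast = linear component + nonlinear residual component
hybrid_forecast = arima_forecast + lstm_residual_forecast
```

The accuracy/MSE/RMSE/MAPE blocks that follow each training run are computed on this combined series, one block per smoothing variant (SMA, EMA, WMA, DEMA).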

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.27877, saving model to LSTM3.h5
10/10 - 3s - loss: 0.7021 - mse: 0.7021 - mae: 0.7244 - val_loss: 0.2788 - val_mse: 0.2788 - val_mae: 0.4616 - lr: 0.0010 - 3s/epoch - 297ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.27877 to 0.16710, saving model to LSTM3.h5
10/10 - 0s - loss: 0.0997 - mse: 0.0997 - mae: 0.2588 - val_loss: 0.1671 - val_mse: 0.1671 - val_mae: 0.3443 - lr: 0.0010 - 117ms/epoch - 12ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.16710 to 0.13466, saving model to LSTM3.h5
10/10 - 0s - loss: 0.0339 - mse: 0.0339 - mae: 0.1506 - val_loss: 0.1347 - val_mse: 0.1347 - val_mae: 0.3038 - lr: 0.0010 - 114ms/epoch - 11ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.13466
10/10 - 0s - loss: 0.0374 - mse: 0.0374 - mae: 0.1632 - val_loss: 0.1425 - val_mse: 0.1425 - val_mae: 0.3146 - lr: 0.0010 - 93ms/epoch - 9ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.13466
10/10 - 0s - loss: 0.0234 - mse: 0.0234 - mae: 0.1232 - val_loss: 0.1594 - val_mse: 0.1594 - val_mae: 0.3359 - lr: 0.0010 - 97ms/epoch - 10ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.13466
10/10 - 0s - loss: 0.0177 - mse: 0.0177 - mae: 0.1065 - val_loss: 0.1713 - val_mse: 0.1713 - val_mae: 0.3504 - lr: 0.0010 - 100ms/epoch - 10ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.13466
10/10 - 0s - loss: 0.0197 - mse: 0.0197 - mae: 0.1111 - val_loss: 0.1758 - val_mse: 0.1758 - val_mae: 0.3559 - lr: 0.0010 - 96ms/epoch - 10ms/step
Epoch 8/500

Epoch 00008: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00008: val_loss did not improve from 0.13466
10/10 - 0s - loss: 0.0170 - mse: 0.0170 - mae: 0.1021 - val_loss: 0.1782 - val_mse: 0.1782 - val_mae: 0.3590 - lr: 0.0010 - 95ms/epoch - 10ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.13466
10/10 - 0s - loss: 0.0160 - mse: 0.0160 - mae: 0.1004 - val_loss: 0.1781 - val_mse: 0.1781 - val_mae: 0.3590 - lr: 1.0000e-04 - 103ms/epoch - 10ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.13466
10/10 - 0s - loss: 0.0166 - mse: 0.0166 - mae: 0.1016 - val_loss: 0.1778 - val_mse: 0.1778 - val_mae: 0.3586 - lr: 1.0000e-04 - 102ms/epoch - 10ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.13466
10/10 - 0s - loss: 0.0164 - mse: 0.0164 - mae: 0.1015 - val_loss: 0.1776 - val_mse: 0.1776 - val_mae: 0.3583 - lr: 1.0000e-04 - 91ms/epoch - 9ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.13466
10/10 - 0s - loss: 0.0164 - mse: 0.0164 - mae: 0.1014 - val_loss: 0.1773 - val_mse: 0.1773 - val_mae: 0.3580 - lr: 1.0000e-04 - 98ms/epoch - 10ms/step
Epoch 13/500

Epoch 00013: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00013: val_loss did not improve from 0.13466
10/10 - 0s - loss: 0.0157 - mse: 0.0157 - mae: 0.0999 - val_loss: 0.1774 - val_mse: 0.1774 - val_mae: 0.3581 - lr: 1.0000e-04 - 95ms/epoch - 9ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.13466
10/10 - 0s - loss: 0.0149 - mse: 0.0149 - mae: 0.0967 - val_loss: 0.1774 - val_mse: 0.1774 - val_mae: 0.3581 - lr: 1.0000e-05 - 94ms/epoch - 9ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.13466
10/10 - 0s - loss: 0.0153 - mse: 0.0153 - mae: 0.0972 - val_loss: 0.1774 - val_mse: 0.1774 - val_mae: 0.3581 - lr: 1.0000e-05 - 98ms/epoch - 10ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.13466
10/10 - 0s - loss: 0.0157 - mse: 0.0157 - mae: 0.0994 - val_loss: 0.1774 - val_mse: 0.1774 - val_mae: 0.3581 - lr: 1.0000e-05 - 104ms/epoch - 10ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.13466
10/10 - 0s - loss: 0.0158 - mse: 0.0158 - mae: 0.0982 - val_loss: 0.1774 - val_mse: 0.1774 - val_mae: 0.3582 - lr: 1.0000e-05 - 134ms/epoch - 13ms/step
Epoch 18/500

Epoch 00018: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00018: val_loss did not improve from 0.13466
10/10 - 0s - loss: 0.0138 - mse: 0.0138 - mae: 0.0920 - val_loss: 0.1775 - val_mse: 0.1775 - val_mae: 0.3582 - lr: 1.0000e-05 - 100ms/epoch - 10ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.13466
10/10 - 0s - loss: 0.0157 - mse: 0.0157 - mae: 0.0978 - val_loss: 0.1774 - val_mse: 0.1774 - val_mae: 0.3582 - lr: 1.0000e-05 - 95ms/epoch - 10ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.13466
10/10 - 0s - loss: 0.0154 - mse: 0.0154 - mae: 0.0989 - val_loss: 0.1775 - val_mse: 0.1775 - val_mae: 0.3582 - lr: 1.0000e-05 - 96ms/epoch - 10ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.13466
10/10 - 0s - loss: 0.0163 - mse: 0.0163 - mae: 0.1008 - val_loss: 0.1774 - val_mse: 0.1774 - val_mae: 0.3582 - lr: 1.0000e-05 - 95ms/epoch - 10ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.13466
10/10 - 0s - loss: 0.0164 - mse: 0.0164 - mae: 0.1016 - val_loss: 0.1774 - val_mse: 0.1774 - val_mae: 0.3582 - lr: 1.0000e-05 - 87ms/epoch - 9ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.13466
10/10 - 0s - loss: 0.0153 - mse: 0.0153 - mae: 0.0977 - val_loss: 0.1774 - val_mse: 0.1774 - val_mae: 0.3582 - lr: 1.0000e-05 - 90ms/epoch - 9ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.13466
10/10 - 0s - loss: 0.0169 - mse: 0.0169 - mae: 0.1015 - val_loss: 0.1774 - val_mse: 0.1774 - val_mae: 0.3582 - lr: 1.0000e-05 - 96ms/epoch - 10ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.13466
10/10 - 0s - loss: 0.0159 - mse: 0.0159 - mae: 0.1013 - val_loss: 0.1774 - val_mse: 0.1774 - val_mae: 0.3582 - lr: 1.0000e-05 - 92ms/epoch - 9ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.13466
10/10 - 0s - loss: 0.0152 - mse: 0.0152 - mae: 0.0978 - val_loss: 0.1774 - val_mse: 0.1774 - val_mae: 0.3582 - lr: 1.0000e-05 - 89ms/epoch - 9ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.13466
10/10 - 0s - loss: 0.0160 - mse: 0.0160 - mae: 0.0991 - val_loss: 0.1774 - val_mse: 0.1774 - val_mae: 0.3581 - lr: 1.0000e-05 - 92ms/epoch - 9ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.13466
10/10 - 0s - loss: 0.0153 - mse: 0.0153 - mae: 0.0991 - val_loss: 0.1773 - val_mse: 0.1773 - val_mae: 0.3581 - lr: 1.0000e-05 - 92ms/epoch - 9ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.13466
10/10 - 0s - loss: 0.0157 - mse: 0.0157 - mae: 0.0995 - val_loss: 0.1773 - val_mse: 0.1773 - val_mae: 0.3581 - lr: 1.0000e-05 - 97ms/epoch - 10ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.13466
10/10 - 0s - loss: 0.0148 - mse: 0.0148 - mae: 0.0973 - val_loss: 0.1773 - val_mse: 0.1773 - val_mae: 0.3581 - lr: 1.0000e-05 - 100ms/epoch - 10ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.13466
10/10 - 0s - loss: 0.0145 - mse: 0.0145 - mae: 0.0945 - val_loss: 0.1774 - val_mse: 0.1774 - val_mae: 0.3581 - lr: 1.0000e-05 - 96ms/epoch - 10ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.13466
10/10 - 0s - loss: 0.0154 - mse: 0.0154 - mae: 0.0973 - val_loss: 0.1774 - val_mse: 0.1774 - val_mae: 0.3582 - lr: 1.0000e-05 - 136ms/epoch - 14ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.13466
10/10 - 0s - loss: 0.0175 - mse: 0.0175 - mae: 0.1044 - val_loss: 0.1774 - val_mse: 0.1774 - val_mae: 0.3582 - lr: 1.0000e-05 - 99ms/epoch - 10ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.13466
10/10 - 0s - loss: 0.0151 - mse: 0.0151 - mae: 0.0962 - val_loss: 0.1775 - val_mse: 0.1775 - val_mae: 0.3583 - lr: 1.0000e-05 - 132ms/epoch - 13ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.13466
10/10 - 0s - loss: 0.0164 - mse: 0.0164 - mae: 0.0994 - val_loss: 0.1775 - val_mse: 0.1775 - val_mae: 0.3583 - lr: 1.0000e-05 - 94ms/epoch - 9ms/step
Epochs 36-53/500: val_loss did not improve from 0.13466 (train loss plateaued at 0.0140-0.0164, val_loss ≈ 0.1775, lr = 1.0000e-05, ~10 ms/step).
Epoch 00053: early stopping
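The checkpoint, learning-rate, and early-stopping messages in the log above are produced by Keras callbacks. A minimal sketch of a configuration consistent with this log; the `patience` values and `factor` are assumptions inferred from the output, not taken from the notebook:

```python
# Sketch only: patience values and factor are assumptions inferred from the log.
from tensorflow.keras.callbacks import (EarlyStopping, ModelCheckpoint,
                                        ReduceLROnPlateau)

callbacks = [
    # Writes LSTM3.h5 whenever val_loss improves ("saving model to LSTM3.h5")
    ModelCheckpoint('LSTM3.h5', monitor='val_loss',
                    save_best_only=True, verbose=1),
    # Steps lr 1e-3 -> 1e-4 -> 1e-5, matching the reductions in the log
    ReduceLROnPlateau(monitor='val_loss', factor=0.1,
                      patience=5, min_lr=1e-5, verbose=1),
    # Halts training once val_loss stalls ("Epoch 00053: early stopping")
    EarlyStopping(monitor='val_loss', patience=50, verbose=1),
]
# model.fit(..., epochs=500, callbacks=callbacks)
```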
SMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	50.75% Accuracy
MSE:	 60.72793966434877 
RMSE:	 7.792813334370892 
MAPE:	 6.245110563496496

EMA
Prediction vs Close:		53.36% Accuracy
Prediction vs Prediction:	49.63% Accuracy
MSE:	 24.31424046870049 
RMSE:	 4.930947218202654 
MAPE:	 4.072073008160127

WMA
Prediction vs Close:		53.36% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 68.15170187847146 
RMSE:	 8.255404404296101 
MAPE:	 6.806281257852447

DEMA
Prediction vs Close:		51.49% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 43.922177447330554 
RMSE:	 6.627380888958364 
MAPE:	 5.414540694927783
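The per-moving-average summaries above report directional accuracy alongside MSE, RMSE, and MAPE. A minimal sketch of how such metrics can be computed; the helper name is illustrative, not the notebook's actual code:

```python
import numpy as np

def evaluate(actual, predicted):
    """Illustrative helper: error metrics plus directional accuracy."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    mse = np.mean((actual - predicted) ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs((actual - predicted) / actual)) * 100
    # Directional accuracy: fraction of steps where the predicted move
    # has the same sign as the actual move
    direction = np.sign(np.diff(predicted)) == np.sign(np.diff(actual))
    return mse, rmse, mape, direction.mean() * 100
```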

KAMA
KAMA([input_arrays], [timeperiod=30])

Kaufman Adaptive Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
18
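`KAMA` above is TA-Lib's Kaufman Adaptive Moving Average, called here with an 18-period efficiency window. For reference, the recursion it implements can be sketched in plain NumPy; the fast/slow constants are TA-Lib's defaults, and the warm-up seeding here is a simplification:

```python
import numpy as np

def kama(price, timeperiod=18, fast=2, slow=30):
    """Kaufman Adaptive Moving Average (NumPy sketch of TA-Lib's KAMA)."""
    price = np.asarray(price, dtype=float)
    out = np.full(len(price), np.nan)            # NaN warm-up, as TA-Lib emits
    out[timeperiod - 1] = price[timeperiod - 1]  # simplified seed value
    fast_sc, slow_sc = 2 / (fast + 1), 2 / (slow + 1)
    for t in range(timeperiod, len(price)):
        change = abs(price[t] - price[t - timeperiod])
        volatility = np.abs(np.diff(price[t - timeperiod:t + 1])).sum()
        er = change / volatility if volatility else 0.0  # efficiency ratio
        sc = (er * (fast_sc - slow_sc) + slow_sc) ** 2   # smoothing constant
        out[t] = out[t - 1] + sc * (price[t] - out[t - 1])
    return out
```

On trending series the efficiency ratio approaches 1, so KAMA tracks price tightly; on choppy series it slows toward the 30-period smoothing.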

Working on KAMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.49 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4190.464, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3724.371, Time=0.06 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.38 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3494.154, Time=0.10 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3357.435, Time=0.14 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.57 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.98 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3359.435, Time=0.28 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 4.053 seconds
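The stepwise search settles on ARIMA(3,3,0), i.e. an AR(3) model fitted to the triple-differenced series. A minimal sketch of that reduction on synthetic data; pmdarima's `auto_arima` performs this fit, plus the stepwise AIC comparison, internally:

```python
import numpy as np

rng = np.random.default_rng(0)
y = np.cumsum(np.cumsum(np.cumsum(rng.normal(size=900))))  # an I(3)-like series
dy = np.diff(y, n=3)                                       # d = 3: difference thrice

# AR(3) on the differenced series via least squares (lags 1, 2, 3)
X = np.column_stack([dy[2:-1], dy[1:-2], dy[:-3]])
target = dy[3:]
coef, *_ = np.linalg.lstsq(X, target, rcond=None)

# Gaussian AIC up to a constant: the quantity the stepwise search minimizes
resid = target - X @ coef
n, k = len(target), 4                # 3 AR coefficients + error variance
aic = n * np.log(resid.var()) + 2 * k
```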
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1674.717
Date:                Sun, 12 Dec 2021   AIC                           3357.435
Time:                        12:48:18   BIC                           3376.198
Sample:                             0   HQIC                          3364.641
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1955      0.003   -381.246      0.000      -1.202      -1.189
ar.L2         -0.8964      0.007   -135.835      0.000      -0.909      -0.883
ar.L3         -0.3971      0.006    -67.229      0.000      -0.409      -0.385
sigma2         3.7466      0.018    211.623      0.000       3.712       3.781
===================================================================================
Ljung-Box (L1) (Q):                  14.20   Jarque-Bera (JB):           2338363.32
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.01   Skew:                             3.76
Prob(H) (two-sided):                  0.00   Kurtosis:                       266.93
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 
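The Jarque-Bera panel above reports a kurtosis of 266.93, far above the mesokurtic value of 3, which is the volatility imbalance flagged in the note at the top of this notebook. Sample kurtosis is easy to check directly (NumPy sketch):

```python
import numpy as np

def kurtosis(x):
    """Sample kurtosis; equals 3 for a normal (mesokurtic) distribution."""
    z = (x - x.mean()) / x.std()
    return float(np.mean(z ** 4))

rng = np.random.default_rng(0)
print(kurtosis(rng.normal(size=100_000)))         # close to 3
print(kurtosis(rng.standard_t(3, size=100_000)))  # heavy tails: well above 3
```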

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.39067, saving model to LSTM3.h5
45/45 - 3s - loss: 0.1630 - mse: 0.1630 - mae: 0.3178 - val_loss: 0.3907 - val_mse: 0.3907 - val_mae: 0.5781 - lr: 0.0010 - 3s/epoch - 71ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.39067 to 0.11867, saving model to LSTM3.h5
45/45 - 0s - loss: 0.0775 - mse: 0.0775 - mae: 0.2131 - val_loss: 0.1187 - val_mse: 0.1187 - val_mae: 0.2877 - lr: 0.0010 - 311ms/epoch - 7ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.11867 to 0.09234, saving model to LSTM3.h5
45/45 - 0s - loss: 0.0253 - mse: 0.0253 - mae: 0.1252 - val_loss: 0.0923 - val_mse: 0.0923 - val_mae: 0.2457 - lr: 0.0010 - 330ms/epoch - 7ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.09234
45/45 - 0s - loss: 0.0167 - mse: 0.0167 - mae: 0.1023 - val_loss: 0.1103 - val_mse: 0.1103 - val_mae: 0.2789 - lr: 0.0010 - 308ms/epoch - 7ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.09234
45/45 - 0s - loss: 0.0149 - mse: 0.0149 - mae: 0.0963 - val_loss: 0.1277 - val_mse: 0.1277 - val_mae: 0.3071 - lr: 0.0010 - 303ms/epoch - 7ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.09234
45/45 - 0s - loss: 0.0150 - mse: 0.0150 - mae: 0.0949 - val_loss: 0.1309 - val_mse: 0.1309 - val_mae: 0.3139 - lr: 0.0010 - 324ms/epoch - 7ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.09234
45/45 - 0s - loss: 0.0141 - mse: 0.0141 - mae: 0.0921 - val_loss: 0.1362 - val_mse: 0.1362 - val_mae: 0.3233 - lr: 0.0010 - 324ms/epoch - 7ms/step
Epoch 8/500

Epoch 00008: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00008: val_loss did not improve from 0.09234
45/45 - 0s - loss: 0.0126 - mse: 0.0126 - mae: 0.0863 - val_loss: 0.1368 - val_mse: 0.1368 - val_mae: 0.3251 - lr: 0.0010 - 336ms/epoch - 7ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.09234
45/45 - 0s - loss: 0.0213 - mse: 0.0213 - mae: 0.1182 - val_loss: 0.1069 - val_mse: 0.1069 - val_mae: 0.2808 - lr: 1.0000e-04 - 342ms/epoch - 8ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.09234
45/45 - 0s - loss: 0.0084 - mse: 0.0084 - mae: 0.0737 - val_loss: 0.1024 - val_mse: 0.1024 - val_mae: 0.2733 - lr: 1.0000e-04 - 300ms/epoch - 7ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.09234
45/45 - 0s - loss: 0.0070 - mse: 0.0070 - mae: 0.0662 - val_loss: 0.1019 - val_mse: 0.1019 - val_mae: 0.2722 - lr: 1.0000e-04 - 311ms/epoch - 7ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.09234
45/45 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0654 - val_loss: 0.1024 - val_mse: 0.1024 - val_mae: 0.2729 - lr: 1.0000e-04 - 302ms/epoch - 7ms/step
Epoch 13/500

Epoch 00013: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00013: val_loss did not improve from 0.09234
45/45 - 0s - loss: 0.0075 - mse: 0.0075 - mae: 0.0691 - val_loss: 0.1030 - val_mse: 0.1030 - val_mae: 0.2738 - lr: 1.0000e-04 - 292ms/epoch - 6ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.09234
45/45 - 0s - loss: 0.0069 - mse: 0.0069 - mae: 0.0662 - val_loss: 0.1030 - val_mse: 0.1030 - val_mae: 0.2738 - lr: 1.0000e-05 - 305ms/epoch - 7ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.09234
45/45 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0625 - val_loss: 0.1031 - val_mse: 0.1031 - val_mae: 0.2739 - lr: 1.0000e-05 - 315ms/epoch - 7ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.09234
45/45 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0627 - val_loss: 0.1030 - val_mse: 0.1030 - val_mae: 0.2737 - lr: 1.0000e-05 - 296ms/epoch - 7ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.09234
45/45 - 0s - loss: 0.0072 - mse: 0.0072 - mae: 0.0675 - val_loss: 0.1029 - val_mse: 0.1029 - val_mae: 0.2735 - lr: 1.0000e-05 - 327ms/epoch - 7ms/step
Epoch 18/500

Epoch 00018: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00018: val_loss did not improve from 0.09234
45/45 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0628 - val_loss: 0.1029 - val_mse: 0.1029 - val_mae: 0.2736 - lr: 1.0000e-05 - 318ms/epoch - 7ms/step
Epochs 19-53/500: val_loss did not improve from 0.09234 (train loss ≈ 0.0058-0.0070, val_loss drifted from 0.1029 to 0.1092, lr = 1.0000e-05, ~7 ms/step).
Epoch 00053: early stopping
SMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	50.75% Accuracy
MSE:	 60.72793966434877 
RMSE:	 7.792813334370892 
MAPE:	 6.245110563496496

EMA
Prediction vs Close:		53.36% Accuracy
Prediction vs Prediction:	49.63% Accuracy
MSE:	 24.31424046870049 
RMSE:	 4.930947218202654 
MAPE:	 4.072073008160127

WMA
Prediction vs Close:		53.36% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 68.15170187847146 
RMSE:	 8.255404404296101 
MAPE:	 6.806281257852447

DEMA
Prediction vs Close:		51.49% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 43.922177447330554 
RMSE:	 6.627380888958364 
MAPE:	 5.414540694927783

KAMA
Prediction vs Close:		56.34% Accuracy
Prediction vs Prediction:	49.25% Accuracy
MSE:	 23.99643218488633 
RMSE:	 4.8986153334270215 
MAPE:	 3.8674202618000764

MIDPOINT
MIDPOINT([input_arrays], [timeperiod=14])

MidPoint over period (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 14
Outputs:
    real
14
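`MIDPOINT` is simply the mean of the highest and lowest price over the window. A NumPy equivalent of the 14-period call above; TA-Lib leaves NaN during the warm-up, mirrored here:

```python
import numpy as np

def midpoint(price, timeperiod=14):
    """MidPoint over period: (max + min) / 2 on a rolling window."""
    price = np.asarray(price, dtype=float)
    out = np.full(len(price), np.nan)
    for t in range(timeperiod - 1, len(price)):
        window = price[t - timeperiod + 1:t + 1]
        out[t] = (window.max() + window.min()) / 2
    return out
```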

Working on MIDPOINT predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.51 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4212.289, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3747.746, Time=0.06 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.33 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3523.401, Time=0.10 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3387.759, Time=0.13 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.65 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=1.15 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3389.758, Time=0.26 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 4.247 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1689.879
Date:                Sun, 12 Dec 2021   AIC                           3387.759
Time:                        12:50:11   BIC                           3406.522
Sample:                             0   HQIC                          3394.964
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1878      0.003   -345.315      0.000      -1.195      -1.181
ar.L2         -0.8876      0.007   -121.809      0.000      -0.902      -0.873
ar.L3         -0.3957      0.007    -60.127      0.000      -0.409      -0.383
sigma2         3.8904      0.020    193.404      0.000       3.851       3.930
===================================================================================
Ljung-Box (L1) (Q):                  13.21   Jarque-Bera (JB):           1659080.01
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.08   Skew:                             3.28
Prob(H) (two-sided):                  0.00   Kurtosis:                       225.31
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.29190, saving model to LSTM3.h5
58/58 - 3s - loss: 0.2526 - mse: 0.2526 - mae: 0.3822 - val_loss: 0.2919 - val_mse: 0.2919 - val_mae: 0.4836 - lr: 0.0010 - 3s/epoch - 58ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.29190 to 0.09505, saving model to LSTM3.h5
58/58 - 0s - loss: 0.0489 - mse: 0.0489 - mae: 0.1754 - val_loss: 0.0950 - val_mse: 0.0950 - val_mae: 0.2416 - lr: 0.0010 - 415ms/epoch - 7ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.09505 to 0.05068, saving model to LSTM3.h5
58/58 - 0s - loss: 0.0288 - mse: 0.0288 - mae: 0.1352 - val_loss: 0.0507 - val_mse: 0.0507 - val_mae: 0.1643 - lr: 0.0010 - 408ms/epoch - 7ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.05068 to 0.03884, saving model to LSTM3.h5
58/58 - 0s - loss: 0.0198 - mse: 0.0198 - mae: 0.1134 - val_loss: 0.0388 - val_mse: 0.0388 - val_mae: 0.1437 - lr: 0.0010 - 395ms/epoch - 7ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.03884 to 0.03849, saving model to LSTM3.h5
58/58 - 0s - loss: 0.0147 - mse: 0.0147 - mae: 0.0983 - val_loss: 0.0385 - val_mse: 0.0385 - val_mae: 0.1416 - lr: 0.0010 - 413ms/epoch - 7ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.03849
58/58 - 0s - loss: 0.0126 - mse: 0.0126 - mae: 0.0897 - val_loss: 0.0468 - val_mse: 0.0468 - val_mae: 0.1582 - lr: 0.0010 - 366ms/epoch - 6ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.03849
58/58 - 0s - loss: 0.0106 - mse: 0.0106 - mae: 0.0803 - val_loss: 0.0513 - val_mse: 0.0513 - val_mae: 0.1701 - lr: 0.0010 - 389ms/epoch - 7ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.03849
58/58 - 0s - loss: 0.0103 - mse: 0.0103 - mae: 0.0789 - val_loss: 0.0522 - val_mse: 0.0522 - val_mae: 0.1758 - lr: 0.0010 - 368ms/epoch - 6ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.03849
58/58 - 0s - loss: 0.0087 - mse: 0.0087 - mae: 0.0724 - val_loss: 0.0560 - val_mse: 0.0560 - val_mae: 0.1883 - lr: 0.0010 - 382ms/epoch - 7ms/step
Epoch 10/500

Epoch 00010: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00010: val_loss did not improve from 0.03849
58/58 - 0s - loss: 0.0092 - mse: 0.0092 - mae: 0.0755 - val_loss: 0.0545 - val_mse: 0.0545 - val_mae: 0.1871 - lr: 0.0010 - 387ms/epoch - 7ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.03849
58/58 - 0s - loss: 0.0151 - mse: 0.0151 - mae: 0.1014 - val_loss: 0.0397 - val_mse: 0.0397 - val_mae: 0.1493 - lr: 1.0000e-04 - 358ms/epoch - 6ms/step
Epoch 12/500

Epoch 00012: val_loss improved from 0.03849 to 0.03841, saving model to LSTM3.h5
58/58 - 0s - loss: 0.0082 - mse: 0.0082 - mae: 0.0727 - val_loss: 0.0384 - val_mse: 0.0384 - val_mae: 0.1459 - lr: 1.0000e-04 - 422ms/epoch - 7ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.03841
58/58 - 0s - loss: 0.0077 - mse: 0.0077 - mae: 0.0706 - val_loss: 0.0389 - val_mse: 0.0389 - val_mae: 0.1471 - lr: 1.0000e-04 - 382ms/epoch - 7ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.03841
58/58 - 0s - loss: 0.0075 - mse: 0.0075 - mae: 0.0675 - val_loss: 0.0387 - val_mse: 0.0387 - val_mae: 0.1464 - lr: 1.0000e-04 - 394ms/epoch - 7ms/step
Epoch 15/500

Epoch 00015: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00015: val_loss did not improve from 0.03841
58/58 - 0s - loss: 0.0073 - mse: 0.0073 - mae: 0.0672 - val_loss: 0.0390 - val_mse: 0.0390 - val_mae: 0.1470 - lr: 1.0000e-04 - 376ms/epoch - 6ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.03841
58/58 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0645 - val_loss: 0.0388 - val_mse: 0.0388 - val_mae: 0.1464 - lr: 1.0000e-05 - 375ms/epoch - 6ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.03841
58/58 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0634 - val_loss: 0.0386 - val_mse: 0.0386 - val_mae: 0.1460 - lr: 1.0000e-05 - 383ms/epoch - 7ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.03841
58/58 - 0s - loss: 0.0067 - mse: 0.0067 - mae: 0.0653 - val_loss: 0.0384 - val_mse: 0.0384 - val_mae: 0.1456 - lr: 1.0000e-05 - 400ms/epoch - 7ms/step
Epoch 19/500

Epoch 00019: val_loss improved from 0.03841 to 0.03829, saving model to LSTM3.h5
58/58 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0658 - val_loss: 0.0383 - val_mse: 0.0383 - val_mae: 0.1452 - lr: 1.0000e-05 - 397ms/epoch - 7ms/step
Epoch 20/500

Epoch 00020: val_loss improved from 0.03829 to 0.03817, saving model to LSTM3.h5
58/58 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0649 - val_loss: 0.0382 - val_mse: 0.0382 - val_mae: 0.1449 - lr: 1.0000e-05 - 472ms/epoch - 8ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.03817
58/58 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0610 - val_loss: 0.0382 - val_mse: 0.0382 - val_mae: 0.1449 - lr: 1.0000e-05 - 382ms/epoch - 7ms/step
Epoch 22/500

Epoch 00022: val_loss improved from 0.03817 to 0.03805, saving model to LSTM3.h5
58/58 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0629 - val_loss: 0.0381 - val_mse: 0.0381 - val_mae: 0.1446 - lr: 1.0000e-05 - 397ms/epoch - 7ms/step
Epoch 23/500

Epoch 00023: val_loss improved from 0.03805 to 0.03794, saving model to LSTM3.h5
58/58 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0642 - val_loss: 0.0379 - val_mse: 0.0379 - val_mae: 0.1443 - lr: 1.0000e-05 - 412ms/epoch - 7ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.03794
58/58 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0640 - val_loss: 0.0380 - val_mse: 0.0380 - val_mae: 0.1446 - lr: 1.0000e-05 - 391ms/epoch - 7ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.03794
58/58 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0619 - val_loss: 0.0382 - val_mse: 0.0382 - val_mae: 0.1450 - lr: 1.0000e-05 - 392ms/epoch - 7ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.03794
58/58 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0641 - val_loss: 0.0382 - val_mse: 0.0382 - val_mae: 0.1451 - lr: 1.0000e-05 - 389ms/epoch - 7ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.03794
58/58 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0628 - val_loss: 0.0383 - val_mse: 0.0383 - val_mae: 0.1452 - lr: 1.0000e-05 - 380ms/epoch - 7ms/step
Epoch 28/500

Epoch 00028: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00028: val_loss did not improve from 0.03794
58/58 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0649 - val_loss: 0.0384 - val_mse: 0.0384 - val_mae: 0.1454 - lr: 1.0000e-05 - 390ms/epoch - 7ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.03794
58/58 - 0s - loss: 0.0069 - mse: 0.0069 - mae: 0.0661 - val_loss: 0.0382 - val_mse: 0.0382 - val_mae: 0.1451 - lr: 1.0000e-05 - 393ms/epoch - 7ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.03794
58/58 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0635 - val_loss: 0.0382 - val_mse: 0.0382 - val_mae: 0.1449 - lr: 1.0000e-05 - 459ms/epoch - 8ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.03794
58/58 - 0s - loss: 0.0067 - mse: 0.0067 - mae: 0.0653 - val_loss: 0.0380 - val_mse: 0.0380 - val_mae: 0.1444 - lr: 1.0000e-05 - 453ms/epoch - 8ms/step
Epoch 32/500

Epoch 00032: val_loss improved from 0.03794 to 0.03787, saving model to LSTM3.h5
58/58 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0611 - val_loss: 0.0379 - val_mse: 0.0379 - val_mae: 0.1442 - lr: 1.0000e-05 - 402ms/epoch - 7ms/step
Epoch 33/500

Epoch 00033: val_loss improved from 0.03787 to 0.03773, saving model to LSTM3.h5
58/58 - 0s - loss: 0.0069 - mse: 0.0069 - mae: 0.0655 - val_loss: 0.0377 - val_mse: 0.0377 - val_mae: 0.1438 - lr: 1.0000e-05 - 394ms/epoch - 7ms/step
Epochs 34-59/500: val_loss did not improve from 0.03773 (train loss ≈ 0.0056-0.0066, val_loss hovered at 0.038-0.040, lr = 1.0000e-05, ~7 ms/step).
Epoch 60/500

Epoch 00060: val_loss did not improve from 0.03773
58/58 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0584 - val_loss: 0.0399 - val_mse: 0.0399 - val_mae: 0.1492 - lr: 1.0000e-05 - 374ms/epoch - 6ms/step
Epoch 61/500

Epoch 00061: val_loss did not improve from 0.03773
58/58 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0607 - val_loss: 0.0400 - val_mse: 0.0400 - val_mae: 0.1496 - lr: 1.0000e-05 - 394ms/epoch - 7ms/step
Epoch 62/500

Epoch 00062: val_loss did not improve from 0.03773
58/58 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0606 - val_loss: 0.0400 - val_mse: 0.0400 - val_mae: 0.1495 - lr: 1.0000e-05 - 377ms/epoch - 7ms/step
Epoch 63/500

Epoch 00063: val_loss did not improve from 0.03773
58/58 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0603 - val_loss: 0.0401 - val_mse: 0.0401 - val_mae: 0.1499 - lr: 1.0000e-05 - 382ms/epoch - 7ms/step
Epoch 64/500

Epoch 00064: val_loss did not improve from 0.03773
58/58 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0593 - val_loss: 0.0404 - val_mse: 0.0404 - val_mae: 0.1505 - lr: 1.0000e-05 - 376ms/epoch - 6ms/step
Epoch 65/500

Epoch 00065: val_loss did not improve from 0.03773
58/58 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0633 - val_loss: 0.0403 - val_mse: 0.0403 - val_mae: 0.1503 - lr: 1.0000e-05 - 384ms/epoch - 7ms/step
Epoch 66/500

Epoch 00066: val_loss did not improve from 0.03773
58/58 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0609 - val_loss: 0.0403 - val_mse: 0.0403 - val_mae: 0.1503 - lr: 1.0000e-05 - 394ms/epoch - 7ms/step
Epoch 67/500

Epoch 00067: val_loss did not improve from 0.03773
58/58 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0591 - val_loss: 0.0404 - val_mse: 0.0404 - val_mae: 0.1506 - lr: 1.0000e-05 - 366ms/epoch - 6ms/step
Epoch 68/500

Epoch 00068: val_loss did not improve from 0.03773
58/58 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0597 - val_loss: 0.0407 - val_mse: 0.0407 - val_mae: 0.1514 - lr: 1.0000e-05 - 376ms/epoch - 6ms/step
Epoch 69/500

Epoch 00069: val_loss did not improve from 0.03773
58/58 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0562 - val_loss: 0.0409 - val_mse: 0.0409 - val_mae: 0.1517 - lr: 1.0000e-05 - 372ms/epoch - 6ms/step
Epoch 70/500

Epoch 00070: val_loss did not improve from 0.03773
58/58 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0595 - val_loss: 0.0409 - val_mse: 0.0409 - val_mae: 0.1519 - lr: 1.0000e-05 - 367ms/epoch - 6ms/step
Epoch 71/500

Epoch 00071: val_loss did not improve from 0.03773
58/58 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0588 - val_loss: 0.0410 - val_mse: 0.0410 - val_mae: 0.1522 - lr: 1.0000e-05 - 390ms/epoch - 7ms/step
Epoch 72/500

Epoch 00072: val_loss did not improve from 0.03773
58/58 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0610 - val_loss: 0.0414 - val_mse: 0.0414 - val_mae: 0.1529 - lr: 1.0000e-05 - 379ms/epoch - 7ms/step
Epoch 73/500

Epoch 00073: val_loss did not improve from 0.03773
58/58 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0614 - val_loss: 0.0414 - val_mse: 0.0414 - val_mae: 0.1531 - lr: 1.0000e-05 - 380ms/epoch - 7ms/step
Epoch 74/500

Epoch 00074: val_loss did not improve from 0.03773
58/58 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0568 - val_loss: 0.0414 - val_mse: 0.0414 - val_mae: 0.1531 - lr: 1.0000e-05 - 399ms/epoch - 7ms/step
Epoch 75/500

Epoch 00075: val_loss did not improve from 0.03773
58/58 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0593 - val_loss: 0.0415 - val_mse: 0.0415 - val_mae: 0.1534 - lr: 1.0000e-05 - 369ms/epoch - 6ms/step
Epoch 76/500

Epoch 00076: val_loss did not improve from 0.03773
58/58 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0583 - val_loss: 0.0417 - val_mse: 0.0417 - val_mae: 0.1538 - lr: 1.0000e-05 - 393ms/epoch - 7ms/step
Epoch 77/500

Epoch 00077: val_loss did not improve from 0.03773
58/58 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0576 - val_loss: 0.0420 - val_mse: 0.0420 - val_mae: 0.1544 - lr: 1.0000e-05 - 381ms/epoch - 7ms/step
Epoch 78/500

Epoch 00078: val_loss did not improve from 0.03773
58/58 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0585 - val_loss: 0.0419 - val_mse: 0.0419 - val_mae: 0.1543 - lr: 1.0000e-05 - 361ms/epoch - 6ms/step
Epoch 79/500

Epoch 00079: val_loss did not improve from 0.03773
58/58 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0588 - val_loss: 0.0420 - val_mse: 0.0420 - val_mae: 0.1545 - lr: 1.0000e-05 - 385ms/epoch - 7ms/step
Epoch 80/500

Epoch 00080: val_loss did not improve from 0.03773
58/58 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0581 - val_loss: 0.0420 - val_mse: 0.0420 - val_mae: 0.1544 - lr: 1.0000e-05 - 367ms/epoch - 6ms/step
Epoch 81/500

Epoch 00081: val_loss did not improve from 0.03773
58/58 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0606 - val_loss: 0.0421 - val_mse: 0.0421 - val_mae: 0.1547 - lr: 1.0000e-05 - 390ms/epoch - 7ms/step
Epoch 82/500

Epoch 00082: val_loss did not improve from 0.03773
58/58 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0569 - val_loss: 0.0427 - val_mse: 0.0427 - val_mae: 0.1561 - lr: 1.0000e-05 - 374ms/epoch - 6ms/step
Epoch 83/500

Epoch 00083: val_loss did not improve from 0.03773
58/58 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0584 - val_loss: 0.0429 - val_mse: 0.0429 - val_mae: 0.1566 - lr: 1.0000e-05 - 381ms/epoch - 7ms/step
Epoch 00083: early stopping
SMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	50.75% Accuracy
MSE:	 60.72793966434877 
RMSE:	 7.792813334370892 
MAPE:	 6.245110563496496

EMA
Prediction vs Close:		53.36% Accuracy
Prediction vs Prediction:	49.63% Accuracy
MSE:	 24.31424046870049 
RMSE:	 4.930947218202654 
MAPE:	 4.072073008160127

WMA
Prediction vs Close:		53.36% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 68.15170187847146 
RMSE:	 8.255404404296101 
MAPE:	 6.806281257852447

DEMA
Prediction vs Close:		51.49% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 43.922177447330554 
RMSE:	 6.627380888958364 
MAPE:	 5.414540694927783

KAMA
Prediction vs Close:		56.34% Accuracy
Prediction vs Prediction:	49.25% Accuracy
MSE:	 23.99643218488633 
RMSE:	 4.8986153334270215 
MAPE:	 3.8674202618000764

MIDPOINT
Prediction vs Close:		50.37% Accuracy
Prediction vs Prediction:	48.88% Accuracy
MSE:	 28.7191654730941 
RMSE:	 5.359026541555295 
MAPE:	 4.42030651732032
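The MSE, RMSE and MAPE figures printed above, plus the two directional-accuracy scores, can be reproduced with a few lines. The helper below is a minimal sketch; the function names and the exact accuracy definitions are assumptions, since the scoring code is not shown in this excerpt ("Prediction vs Close" is read here as the hit rate of the predicted direction against the realized close-to-close move):

```python
import math

def regression_metrics(y_true, y_pred):
    """MSE, RMSE and MAPE (in percent) for two equal-length sequences."""
    n = len(y_true)
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
    mape = 100.0 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / n
    return mse, math.sqrt(mse), mape

def directional_accuracy(close, pred):
    """Share of steps where the predicted move (pred[i] vs close[i-1])
    has the same sign as the realized move (close[i] vs close[i-1])."""
    hits = sum(
        (p - c_prev) * (c - c_prev) > 0
        for c_prev, c, p in zip(close, close[1:], pred[1:])
    )
    return 100.0 * hits / (len(close) - 1)
```

As a sanity check against the printout: for the KAMA variant, MSE ≈ 24.0 corresponds to RMSE ≈ 4.90, which matches the reported pair.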
T3
T3([input_arrays], [timeperiod=5], [vfactor=0.7])

Triple Exponential Moving Average (T3) (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 5
    vfactor: 0.7
Outputs:
    real
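TA-Lib's T3 is a triple application of a "generalized DEMA", GD(x) = (1+v)·EMA(x) − v·EMA(EMA(x)), with volume factor v (0.7 here). A pure-Python sketch of that construction follows; note TA-Lib's warm-up/seeding differs, so early values will not match its output exactly:

```python
def ema(xs, period):
    """Simple recursive EMA seeded with the first value."""
    k = 2.0 / (period + 1)
    out = [xs[0]]
    for x in xs[1:]:
        out.append(k * x + (1 - k) * out[-1])
    return out

def t3(xs, period=5, vfactor=0.7):
    """T3 = GD(GD(GD(x))) with GD(x) = (1+v)*EMA(x) - v*EMA(EMA(x))."""
    def gd(series):
        e1 = ema(series, period)
        e2 = ema(e1, period)
        return [(1 + vfactor) * a - vfactor * b for a, b in zip(e1, e2)]
    return gd(gd(gd(xs)))
```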
19

Working on T3 predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.51 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4414.515, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3944.062, Time=0.06 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.51 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3715.173, Time=0.10 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3577.471, Time=0.12 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.83 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.77 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3579.471, Time=0.25 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 4.208 seconds
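The stepwise search above ranks candidate orders by AIC = 2k − 2·ln L̂, exploring the neighborhood of the current best (p, d, q). The comparison itself is simple; the sketch below ranks AR(p) fits on an already-differenced series by AIC, using ordinary least squares and a Gaussian likelihood as a stand-in for pmdarima's full maximum-likelihood fit (illustrative only, not pmdarima's algorithm):

```python
import numpy as np

def ar_aic(x, p):
    """AIC of an AR(p) least-squares fit on a 1-D series x."""
    x = np.asarray(x, dtype=float)
    # Lag matrix: row for time t holds x[t-1], ..., x[t-p]
    X = np.column_stack([x[p - i - 1:len(x) - i - 1] for i in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    sigma2 = resid.var()
    n = len(y)
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return 2 * (p + 1) - 2 * loglik  # k = p coefficients + sigma^2

# Simulate a stationary AR(3) process and pick the order with minimal AIC
rng = np.random.default_rng(0)
x = np.zeros(800)
for t in range(3, 800):
    x[t] = 0.5 * x[t-1] - 0.3 * x[t-2] + 0.1 * x[t-3] + rng.normal()
best = min(range(1, 6), key=lambda p: ar_aic(x, p))
```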
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1784.736
Date:                Sun, 12 Dec 2021   AIC                           3577.471
Time:                        12:52:18   BIC                           3596.235
Sample:                             0   HQIC                          3584.677
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1982      0.003   -389.844      0.000      -1.204      -1.192
ar.L2         -0.8974      0.006   -139.861      0.000      -0.910      -0.885
ar.L3         -0.3983      0.006    -68.862      0.000      -0.410      -0.387
sigma2         4.9242      0.023    215.469      0.000       4.879       4.969
===================================================================================
Ljung-Box (L1) (Q):                  14.55   Jarque-Bera (JB):           2468024.38
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       274.15
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 
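The enormous Jarque-Bera statistic above (kurtosis 274.15 versus 3 for a normal) quantifies how heavy-tailed the residuals are, the same non-mesokurtic behaviour flagged in the introduction. JB is just a function of sample skewness and kurtosis; a quick sketch of the computation:

```python
def sample_moments(xs):
    """Return (skewness, kurtosis) using the simple moment estimators."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m3 / m2 ** 1.5, m4 / m2 ** 2

def jarque_bera(xs):
    """JB = n/6 * (S^2 + (K - 3)^2 / 4); large values reject normality."""
    n = len(xs)
    s, k = sample_moments(xs)
    return n / 6.0 * (s ** 2 + (k - 3.0) ** 2 / 4.0)
```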

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.44946, saving model to LSTM3.h5
43/43 - 3s - loss: 0.1358 - mse: 0.1358 - mae: 0.2724 - val_loss: 0.4495 - val_mse: 0.4495 - val_mae: 0.6183 - lr: 0.0010 - 3s/epoch - 77ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.44946 to 0.33516, saving model to LSTM3.h5
43/43 - 0s - loss: 0.0321 - mse: 0.0321 - mae: 0.1409 - val_loss: 0.3352 - val_mse: 0.3352 - val_mae: 0.5241 - lr: 0.0010 - 340ms/epoch - 8ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.33516 to 0.31563, saving model to LSTM3.h5
43/43 - 0s - loss: 0.0187 - mse: 0.0187 - mae: 0.1080 - val_loss: 0.3156 - val_mse: 0.3156 - val_mae: 0.5088 - lr: 0.0010 - 316ms/epoch - 7ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.31563 to 0.28607, saving model to LSTM3.h5
43/43 - 0s - loss: 0.0172 - mse: 0.0172 - mae: 0.1047 - val_loss: 0.2861 - val_mse: 0.2861 - val_mae: 0.4844 - lr: 0.0010 - 318ms/epoch - 7ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.28607 to 0.28490, saving model to LSTM3.h5
43/43 - 0s - loss: 0.0140 - mse: 0.0140 - mae: 0.0930 - val_loss: 0.2849 - val_mse: 0.2849 - val_mae: 0.4851 - lr: 0.0010 - 326ms/epoch - 8ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.28490 to 0.26082, saving model to LSTM3.h5
43/43 - 0s - loss: 0.0154 - mse: 0.0154 - mae: 0.0983 - val_loss: 0.2608 - val_mse: 0.2608 - val_mae: 0.4637 - lr: 0.0010 - 323ms/epoch - 8ms/step
Epoch 7/500

Epoch 00007: val_loss improved from 0.26082 to 0.21860, saving model to LSTM3.h5
43/43 - 0s - loss: 0.0143 - mse: 0.0143 - mae: 0.0931 - val_loss: 0.2186 - val_mse: 0.2186 - val_mae: 0.4217 - lr: 0.0010 - 325ms/epoch - 8ms/step
Epoch 8/500

Epoch 00008: val_loss improved from 0.21860 to 0.21581, saving model to LSTM3.h5
43/43 - 0s - loss: 0.0137 - mse: 0.0137 - mae: 0.0916 - val_loss: 0.2158 - val_mse: 0.2158 - val_mae: 0.4206 - lr: 0.0010 - 344ms/epoch - 8ms/step
Epoch 9/500

Epoch 00009: val_loss improved from 0.21581 to 0.18839, saving model to LSTM3.h5
43/43 - 0s - loss: 0.0141 - mse: 0.0141 - mae: 0.0930 - val_loss: 0.1884 - val_mse: 0.1884 - val_mae: 0.3913 - lr: 0.0010 - 321ms/epoch - 7ms/step
Epoch 10/500

Epoch 00010: val_loss improved from 0.18839 to 0.17354, saving model to LSTM3.h5
43/43 - 0s - loss: 0.0141 - mse: 0.0141 - mae: 0.0923 - val_loss: 0.1735 - val_mse: 0.1735 - val_mae: 0.3747 - lr: 0.0010 - 319ms/epoch - 7ms/step
Epoch 11/500

Epoch 00011: val_loss improved from 0.17354 to 0.14548, saving model to LSTM3.h5
43/43 - 0s - loss: 0.0163 - mse: 0.0163 - mae: 0.1007 - val_loss: 0.1455 - val_mse: 0.1455 - val_mae: 0.3394 - lr: 0.0010 - 331ms/epoch - 8ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.14548
43/43 - 0s - loss: 0.0140 - mse: 0.0140 - mae: 0.0915 - val_loss: 0.1469 - val_mse: 0.1469 - val_mae: 0.3429 - lr: 0.0010 - 302ms/epoch - 7ms/step
Epoch 13/500

Epoch 00013: val_loss improved from 0.14548 to 0.12886, saving model to LSTM3.h5
43/43 - 0s - loss: 0.0158 - mse: 0.0158 - mae: 0.0985 - val_loss: 0.1289 - val_mse: 0.1289 - val_mae: 0.3190 - lr: 0.0010 - 335ms/epoch - 8ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.12886
43/43 - 0s - loss: 0.0170 - mse: 0.0170 - mae: 0.1029 - val_loss: 0.1327 - val_mse: 0.1327 - val_mae: 0.3271 - lr: 0.0010 - 301ms/epoch - 7ms/step
Epoch 15/500

Epoch 00015: val_loss improved from 0.12886 to 0.11198, saving model to LSTM3.h5
43/43 - 0s - loss: 0.0181 - mse: 0.0181 - mae: 0.1063 - val_loss: 0.1120 - val_mse: 0.1120 - val_mae: 0.2975 - lr: 0.0010 - 317ms/epoch - 7ms/step
Epoch 16/500

Epoch 00016: val_loss improved from 0.11198 to 0.10921, saving model to LSTM3.h5
43/43 - 0s - loss: 0.0172 - mse: 0.0172 - mae: 0.1056 - val_loss: 0.1092 - val_mse: 0.1092 - val_mae: 0.2938 - lr: 0.0010 - 332ms/epoch - 8ms/step
Epoch 17/500

Epoch 00017: val_loss improved from 0.10921 to 0.09012, saving model to LSTM3.h5
43/43 - 0s - loss: 0.0171 - mse: 0.0171 - mae: 0.1050 - val_loss: 0.0901 - val_mse: 0.0901 - val_mae: 0.2618 - lr: 0.0010 - 340ms/epoch - 8ms/step
Epoch 18/500

Epoch 00018: val_loss improved from 0.09012 to 0.08075, saving model to LSTM3.h5
43/43 - 0s - loss: 0.0160 - mse: 0.0160 - mae: 0.1023 - val_loss: 0.0807 - val_mse: 0.0807 - val_mae: 0.2465 - lr: 0.0010 - 336ms/epoch - 8ms/step
Epoch 19/500

Epoch 00019: val_loss improved from 0.08075 to 0.07533, saving model to LSTM3.h5
43/43 - 0s - loss: 0.0158 - mse: 0.0158 - mae: 0.1039 - val_loss: 0.0753 - val_mse: 0.0753 - val_mae: 0.2365 - lr: 0.0010 - 324ms/epoch - 8ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.07533
43/43 - 0s - loss: 0.0135 - mse: 0.0135 - mae: 0.0955 - val_loss: 0.0876 - val_mse: 0.0876 - val_mae: 0.2597 - lr: 0.0010 - 312ms/epoch - 7ms/step
Epoch 21/500

Epoch 00021: val_loss improved from 0.07533 to 0.06043, saving model to LSTM3.h5
43/43 - 0s - loss: 0.0119 - mse: 0.0119 - mae: 0.0872 - val_loss: 0.0604 - val_mse: 0.0604 - val_mae: 0.2063 - lr: 0.0010 - 319ms/epoch - 7ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.06043
43/43 - 0s - loss: 0.0103 - mse: 0.0103 - mae: 0.0824 - val_loss: 0.0832 - val_mse: 0.0832 - val_mae: 0.2516 - lr: 0.0010 - 300ms/epoch - 7ms/step
Epoch 23/500

Epoch 00023: val_loss improved from 0.06043 to 0.05807, saving model to LSTM3.h5
43/43 - 0s - loss: 0.0087 - mse: 0.0087 - mae: 0.0768 - val_loss: 0.0581 - val_mse: 0.0581 - val_mae: 0.2016 - lr: 0.0010 - 344ms/epoch - 8ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.05807
43/43 - 0s - loss: 0.0087 - mse: 0.0087 - mae: 0.0741 - val_loss: 0.0809 - val_mse: 0.0809 - val_mae: 0.2482 - lr: 0.0010 - 300ms/epoch - 7ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.05807
43/43 - 0s - loss: 0.0081 - mse: 0.0081 - mae: 0.0739 - val_loss: 0.0599 - val_mse: 0.0599 - val_mae: 0.2066 - lr: 0.0010 - 295ms/epoch - 7ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.05807
43/43 - 0s - loss: 0.0085 - mse: 0.0085 - mae: 0.0740 - val_loss: 0.0976 - val_mse: 0.0976 - val_mae: 0.2782 - lr: 0.0010 - 289ms/epoch - 7ms/step
Epoch 27/500

Epoch 00027: val_loss improved from 0.05807 to 0.04480, saving model to LSTM3.h5
43/43 - 0s - loss: 0.0069 - mse: 0.0069 - mae: 0.0670 - val_loss: 0.0448 - val_mse: 0.0448 - val_mae: 0.1721 - lr: 0.0010 - 344ms/epoch - 8ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.04480
43/43 - 0s - loss: 0.0072 - mse: 0.0072 - mae: 0.0676 - val_loss: 0.0764 - val_mse: 0.0764 - val_mae: 0.2408 - lr: 0.0010 - 312ms/epoch - 7ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.04480
43/43 - 0s - loss: 0.0067 - mse: 0.0067 - mae: 0.0654 - val_loss: 0.0476 - val_mse: 0.0476 - val_mae: 0.1803 - lr: 0.0010 - 302ms/epoch - 7ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.04480
43/43 - 0s - loss: 0.0076 - mse: 0.0076 - mae: 0.0694 - val_loss: 0.0789 - val_mse: 0.0789 - val_mae: 0.2465 - lr: 0.0010 - 324ms/epoch - 8ms/step
Epoch 31/500

Epoch 00031: val_loss improved from 0.04480 to 0.03793, saving model to LSTM3.h5
43/43 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0646 - val_loss: 0.0379 - val_mse: 0.0379 - val_mae: 0.1565 - lr: 0.0010 - 342ms/epoch - 8ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.03793
43/43 - 0s - loss: 0.0075 - mse: 0.0075 - mae: 0.0688 - val_loss: 0.0968 - val_mse: 0.0968 - val_mae: 0.2792 - lr: 0.0010 - 302ms/epoch - 7ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.03793
43/43 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0638 - val_loss: 0.0428 - val_mse: 0.0428 - val_mae: 0.1703 - lr: 0.0010 - 323ms/epoch - 8ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.03793
43/43 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0637 - val_loss: 0.0938 - val_mse: 0.0938 - val_mae: 0.2747 - lr: 0.0010 - 307ms/epoch - 7ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.03793
43/43 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0640 - val_loss: 0.0404 - val_mse: 0.0404 - val_mae: 0.1647 - lr: 0.0010 - 296ms/epoch - 7ms/step
Epoch 36/500

Epoch 00036: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00036: val_loss did not improve from 0.03793
43/43 - 0s - loss: 0.0076 - mse: 0.0076 - mae: 0.0661 - val_loss: 0.1039 - val_mse: 0.1039 - val_mae: 0.2926 - lr: 0.0010 - 325ms/epoch - 8ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.03793
43/43 - 0s - loss: 0.0095 - mse: 0.0095 - mae: 0.0783 - val_loss: 0.0696 - val_mse: 0.0696 - val_mae: 0.2319 - lr: 1.0000e-04 - 300ms/epoch - 7ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.03793
43/43 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0538 - val_loss: 0.0613 - val_mse: 0.0613 - val_mae: 0.2146 - lr: 1.0000e-04 - 293ms/epoch - 7ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.03793
43/43 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0521 - val_loss: 0.0593 - val_mse: 0.0593 - val_mae: 0.2103 - lr: 1.0000e-04 - 306ms/epoch - 7ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.03793
43/43 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0502 - val_loss: 0.0585 - val_mse: 0.0585 - val_mae: 0.2084 - lr: 1.0000e-04 - 306ms/epoch - 7ms/step
Epoch 41/500

Epoch 00041: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00041: val_loss did not improve from 0.03793
43/43 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0497 - val_loss: 0.0577 - val_mse: 0.0577 - val_mae: 0.2067 - lr: 1.0000e-04 - 311ms/epoch - 7ms/step
Epochs 42-81: val_loss did not improve from 0.03793 (val_loss held near 0.056 at lr 1.0000e-05; per-epoch lines omitted)
Epoch 00081: early stopping
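Both runs end the same way: ReduceLROnPlateau steps the learning rate down (1e-3, then 1e-4, then 1e-5) when val_loss stalls, and EarlyStopping aborts once no improvement is seen within its patience window, with ModelCheckpoint keeping the best weights. The bookkeeping behind those callbacks reduces to a couple of counters; this is a plain-Python sketch of the mechanic, not Keras's actual implementation, and the patience values are illustrative:

```python
def train_with_callbacks(val_losses, lr=1e-3, lr_patience=5, lr_factor=0.1,
                         stop_patience=40, min_lr=1e-5):
    """Replay a sequence of validation losses through plateau-LR and
    early-stopping logic; returns (stop_epoch, best_loss, final_lr)."""
    best, since_best, since_lr = float("inf"), 0, 0
    for epoch, vl in enumerate(val_losses, start=1):
        if vl < best:
            best, since_best, since_lr = vl, 0, 0  # checkpoint would save here
        else:
            since_best += 1
            since_lr += 1
        if since_lr >= lr_patience:                # plateau: cut the LR
            lr, since_lr = max(lr * lr_factor, min_lr), 0
        if since_best >= stop_patience:            # give up, keep best weights
            return epoch, best, lr
    return len(val_losses), best, lr
```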
T3
Prediction vs Close:		51.87% Accuracy
Prediction vs Prediction:	47.39% Accuracy
MSE:	 57.05690799124099 
RMSE:	 7.553602318843705 
MAPE:	 6.064989593585796
TEMA
TEMA([input_arrays], [timeperiod=30])

Triple Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
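TEMA stacks three EMAs to cancel most of the smoothing lag: TEMA = 3·EMA(x) − 3·EMA(EMA(x)) + EMA(EMA(EMA(x))). A compact sketch, again with a simple first-value-seeded EMA, so TA-Lib's warm-up region will differ:

```python
def ema(xs, period):
    """Recursive EMA seeded with the first value."""
    k = 2.0 / (period + 1)
    out = [xs[0]]
    for x in xs[1:]:
        out.append(k * x + (1 - k) * out[-1])
    return out

def tema(xs, period=30):
    """TEMA = 3*EMA - 3*EMA(EMA) + EMA(EMA(EMA))."""
    e1 = ema(xs, period)
    e2 = ema(e1, period)
    e3 = ema(e2, period)
    return [3 * a - 3 * b + c for a, b, c in zip(e1, e2, e3)]
```

On a linear trend the triple combination tracks the input far more closely than a single EMA, which is why TEMA is favored when lag matters.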
9

Working on TEMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.63 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4352.703, Time=0.04 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3889.412, Time=0.06 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.41 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3689.930, Time=0.08 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3574.245, Time=0.11 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.49 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=1.06 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3576.245, Time=0.28 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 4.162 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1783.123
Date:                Sun, 12 Dec 2021   AIC                           3574.245
Time:                        12:54:26   BIC                           3593.008
Sample:                             0   HQIC                          3581.451
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1480      0.004   -302.430      0.000      -1.155      -1.141
ar.L2         -0.8300      0.008    -99.682      0.000      -0.846      -0.814
ar.L3         -0.3687      0.007    -50.527      0.000      -0.383      -0.354
sigma2         4.9055      0.028    175.970      0.000       4.851       4.960
===================================================================================
Ljung-Box (L1) (Q):                  11.61   Jarque-Bera (JB):           1261976.58
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.16   Skew:                             2.52
Prob(H) (two-sided):                  0.00   Kurtosis:                       196.90
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 
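The stepwise search logged above (performed by pmdarima's `auto_arima`) selects the order that minimizes AIC. A simplified, self-contained sketch of that idea, restricted to choosing the AR order on a simulated toy series (no differencing; `y` and the `ar_aic` helper are stand-ins, not the notebook's data or code):

```python
import numpy as np

# Simulate an AR(3)-like series as stand-in data.
rng = np.random.default_rng(42)
n = 808
y = np.zeros(n)
for t in range(3, n):
    y[t] = 0.5 * y[t-1] - 0.3 * y[t-2] + 0.2 * y[t-3] + rng.normal()

def ar_aic(series, p):
    """Least-squares AR(p) fit; returns AIC under a Gaussian likelihood."""
    n = len(series)
    # Lag matrix: column i holds the series at lag i+1.
    X = np.column_stack([series[p-1-i : n-1-i] for i in range(p)])
    target = series[p:]
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ beta
    m = len(resid)
    sigma2 = np.mean(resid**2)
    loglik = -0.5 * m * (np.log(2 * np.pi * sigma2) + 1)
    return 2 * (p + 1) - 2 * loglik  # k = p coefficients + 1 variance term

aics = {p: ar_aic(y, p) for p in range(1, 5)}
best = min(aics, key=aics.get)
print(best, aics[best])
```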

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.52336, saving model to LSTM3.h5
90/90 - 4s - loss: 0.1259 - mse: 0.1259 - mae: 0.2372 - val_loss: 0.5234 - val_mse: 0.5234 - val_mae: 0.6793 - lr: 0.0010 - 4s/epoch - 40ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.52336 to 0.46424, saving model to LSTM3.h5
90/90 - 1s - loss: 0.0151 - mse: 0.0151 - mae: 0.0950 - val_loss: 0.4642 - val_mse: 0.4642 - val_mae: 0.6371 - lr: 0.0010 - 601ms/epoch - 7ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.46424 to 0.39030, saving model to LSTM3.h5
90/90 - 1s - loss: 0.0104 - mse: 0.0104 - mae: 0.0811 - val_loss: 0.3903 - val_mse: 0.3903 - val_mae: 0.5815 - lr: 0.0010 - 592ms/epoch - 7ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.39030
90/90 - 1s - loss: 0.0096 - mse: 0.0096 - mae: 0.0776 - val_loss: 0.4059 - val_mse: 0.4059 - val_mae: 0.5945 - lr: 0.0010 - 582ms/epoch - 6ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.39030 to 0.37908, saving model to LSTM3.h5
90/90 - 1s - loss: 0.0107 - mse: 0.0107 - mae: 0.0809 - val_loss: 0.3791 - val_mse: 0.3791 - val_mae: 0.5737 - lr: 0.0010 - 587ms/epoch - 7ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.37908 to 0.34176, saving model to LSTM3.h5
90/90 - 1s - loss: 0.0137 - mse: 0.0137 - mae: 0.0906 - val_loss: 0.3418 - val_mse: 0.3418 - val_mae: 0.5434 - lr: 0.0010 - 593ms/epoch - 7ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.34176
90/90 - 1s - loss: 0.0126 - mse: 0.0126 - mae: 0.0863 - val_loss: 0.3475 - val_mse: 0.3475 - val_mae: 0.5496 - lr: 0.0010 - 590ms/epoch - 7ms/step
Epoch 8/500

Epoch 00008: val_loss improved from 0.34176 to 0.33384, saving model to LSTM3.h5
90/90 - 1s - loss: 0.0139 - mse: 0.0139 - mae: 0.0918 - val_loss: 0.3338 - val_mse: 0.3338 - val_mae: 0.5411 - lr: 0.0010 - 584ms/epoch - 6ms/step
Epoch 9/500

Epoch 00009: val_loss improved from 0.33384 to 0.27606, saving model to LSTM3.h5
90/90 - 1s - loss: 0.0103 - mse: 0.0103 - mae: 0.0797 - val_loss: 0.2761 - val_mse: 0.2761 - val_mae: 0.4891 - lr: 0.0010 - 605ms/epoch - 7ms/step
Epoch 10/500

Epoch 00010: val_loss improved from 0.27606 to 0.27021, saving model to LSTM3.h5
90/90 - 1s - loss: 0.0089 - mse: 0.0089 - mae: 0.0709 - val_loss: 0.2702 - val_mse: 0.2702 - val_mae: 0.4844 - lr: 0.0010 - 594ms/epoch - 7ms/step
Epoch 11/500

Epoch 00011: val_loss improved from 0.27021 to 0.21405, saving model to LSTM3.h5
90/90 - 1s - loss: 0.0061 - mse: 0.0061 - mae: 0.0610 - val_loss: 0.2141 - val_mse: 0.2141 - val_mae: 0.4274 - lr: 0.0010 - 638ms/epoch - 7ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.21405
90/90 - 1s - loss: 0.0064 - mse: 0.0064 - mae: 0.0629 - val_loss: 0.2688 - val_mse: 0.2688 - val_mae: 0.4840 - lr: 0.0010 - 617ms/epoch - 7ms/step
Epoch 13/500

Epoch 00013: val_loss improved from 0.21405 to 0.20815, saving model to LSTM3.h5
90/90 - 1s - loss: 0.0074 - mse: 0.0074 - mae: 0.0658 - val_loss: 0.2081 - val_mse: 0.2081 - val_mae: 0.4220 - lr: 0.0010 - 604ms/epoch - 7ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.20815
90/90 - 1s - loss: 0.0077 - mse: 0.0077 - mae: 0.0658 - val_loss: 0.2552 - val_mse: 0.2552 - val_mae: 0.4721 - lr: 0.0010 - 643ms/epoch - 7ms/step
Epoch 15/500

Epoch 00015: val_loss improved from 0.20815 to 0.17608, saving model to LSTM3.h5
90/90 - 1s - loss: 0.0070 - mse: 0.0070 - mae: 0.0624 - val_loss: 0.1761 - val_mse: 0.1761 - val_mae: 0.3862 - lr: 0.0010 - 658ms/epoch - 7ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.17608
90/90 - 1s - loss: 0.0076 - mse: 0.0076 - mae: 0.0665 - val_loss: 0.2636 - val_mse: 0.2636 - val_mae: 0.4820 - lr: 0.0010 - 640ms/epoch - 7ms/step
Epoch 17/500

Epoch 00017: val_loss improved from 0.17608 to 0.14463, saving model to LSTM3.h5
90/90 - 1s - loss: 0.0076 - mse: 0.0076 - mae: 0.0641 - val_loss: 0.1446 - val_mse: 0.1446 - val_mae: 0.3483 - lr: 0.0010 - 601ms/epoch - 7ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.14463
90/90 - 1s - loss: 0.0076 - mse: 0.0076 - mae: 0.0638 - val_loss: 0.2627 - val_mse: 0.2627 - val_mae: 0.4835 - lr: 0.0010 - 583ms/epoch - 6ms/step
Epoch 19/500

Epoch 00019: val_loss improved from 0.14463 to 0.10852, saving model to LSTM3.h5
90/90 - 1s - loss: 0.0067 - mse: 0.0067 - mae: 0.0593 - val_loss: 0.1085 - val_mse: 0.1085 - val_mae: 0.2977 - lr: 0.0010 - 656ms/epoch - 7ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.10852
90/90 - 1s - loss: 0.0069 - mse: 0.0069 - mae: 0.0608 - val_loss: 0.2559 - val_mse: 0.2559 - val_mae: 0.4772 - lr: 0.0010 - 598ms/epoch - 7ms/step
Epoch 21/500

Epoch 00021: val_loss improved from 0.10852 to 0.10372, saving model to LSTM3.h5
90/90 - 1s - loss: 0.0065 - mse: 0.0065 - mae: 0.0597 - val_loss: 0.1037 - val_mse: 0.1037 - val_mae: 0.2916 - lr: 0.0010 - 640ms/epoch - 7ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.10372
90/90 - 1s - loss: 0.0058 - mse: 0.0058 - mae: 0.0590 - val_loss: 0.2050 - val_mse: 0.2050 - val_mae: 0.4258 - lr: 0.0010 - 623ms/epoch - 7ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.10372
90/90 - 1s - loss: 0.0057 - mse: 0.0057 - mae: 0.0566 - val_loss: 0.1106 - val_mse: 0.1106 - val_mae: 0.3051 - lr: 0.0010 - 581ms/epoch - 6ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.10372
90/90 - 1s - loss: 0.0057 - mse: 0.0057 - mae: 0.0583 - val_loss: 0.1798 - val_mse: 0.1798 - val_mae: 0.3982 - lr: 0.0010 - 586ms/epoch - 7ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.10372
90/90 - 1s - loss: 0.0057 - mse: 0.0057 - mae: 0.0577 - val_loss: 0.1084 - val_mse: 0.1084 - val_mae: 0.3016 - lr: 0.0010 - 636ms/epoch - 7ms/step
Epoch 26/500

Epoch 00026: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00026: val_loss did not improve from 0.10372
90/90 - 1s - loss: 0.0060 - mse: 0.0060 - mae: 0.0609 - val_loss: 0.1685 - val_mse: 0.1685 - val_mae: 0.3851 - lr: 0.0010 - 577ms/epoch - 6ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.10372
90/90 - 1s - loss: 0.0107 - mse: 0.0107 - mae: 0.0824 - val_loss: 0.1128 - val_mse: 0.1128 - val_mae: 0.3118 - lr: 1.0000e-04 - 552ms/epoch - 6ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.10372
90/90 - 1s - loss: 0.0045 - mse: 0.0045 - mae: 0.0543 - val_loss: 0.1052 - val_mse: 0.1052 - val_mae: 0.3003 - lr: 1.0000e-04 - 625ms/epoch - 7ms/step
Epoch 29/500

Epoch 00029: val_loss improved from 0.10372 to 0.10207, saving model to LSTM3.h5
90/90 - 1s - loss: 0.0039 - mse: 0.0039 - mae: 0.0494 - val_loss: 0.1021 - val_mse: 0.1021 - val_mae: 0.2952 - lr: 1.0000e-04 - 608ms/epoch - 7ms/step
Epoch 30/500

Epoch 00030: val_loss improved from 0.10207 to 0.10053, saving model to LSTM3.h5
90/90 - 1s - loss: 0.0038 - mse: 0.0038 - mae: 0.0484 - val_loss: 0.1005 - val_mse: 0.1005 - val_mae: 0.2926 - lr: 1.0000e-04 - 575ms/epoch - 6ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.10053
90/90 - 1s - loss: 0.0038 - mse: 0.0038 - mae: 0.0502 - val_loss: 0.1011 - val_mse: 0.1011 - val_mae: 0.2933 - lr: 1.0000e-04 - 587ms/epoch - 7ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.10053
90/90 - 1s - loss: 0.0036 - mse: 0.0036 - mae: 0.0478 - val_loss: 0.1024 - val_mse: 0.1024 - val_mae: 0.2952 - lr: 1.0000e-04 - 574ms/epoch - 6ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.10053
90/90 - 1s - loss: 0.0039 - mse: 0.0039 - mae: 0.0480 - val_loss: 0.1036 - val_mse: 0.1036 - val_mae: 0.2969 - lr: 1.0000e-04 - 580ms/epoch - 6ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.10053
90/90 - 1s - loss: 0.0034 - mse: 0.0034 - mae: 0.0462 - val_loss: 0.1085 - val_mse: 0.1085 - val_mae: 0.3043 - lr: 1.0000e-04 - 578ms/epoch - 6ms/step
Epoch 35/500

Epoch 00035: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00035: val_loss did not improve from 0.10053
90/90 - 1s - loss: 0.0036 - mse: 0.0036 - mae: 0.0464 - val_loss: 0.1101 - val_mse: 0.1101 - val_mae: 0.3066 - lr: 1.0000e-04 - 577ms/epoch - 6ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.10053
90/90 - 1s - loss: 0.0035 - mse: 0.0035 - mae: 0.0458 - val_loss: 0.1084 - val_mse: 0.1084 - val_mae: 0.3041 - lr: 1.0000e-05 - 581ms/epoch - 6ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.10053
90/90 - 1s - loss: 0.0033 - mse: 0.0033 - mae: 0.0450 - val_loss: 0.1074 - val_mse: 0.1074 - val_mae: 0.3026 - lr: 1.0000e-05 - 633ms/epoch - 7ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.10053
90/90 - 1s - loss: 0.0032 - mse: 0.0032 - mae: 0.0449 - val_loss: 0.1066 - val_mse: 0.1066 - val_mae: 0.3013 - lr: 1.0000e-05 - 577ms/epoch - 6ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.10053
90/90 - 1s - loss: 0.0032 - mse: 0.0032 - mae: 0.0437 - val_loss: 0.1062 - val_mse: 0.1062 - val_mae: 0.3007 - lr: 1.0000e-05 - 590ms/epoch - 7ms/step
Epoch 40/500

Epoch 00040: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00040: val_loss did not improve from 0.10053
90/90 - 1s - loss: 0.0032 - mse: 0.0032 - mae: 0.0441 - val_loss: 0.1057 - val_mse: 0.1057 - val_mae: 0.2999 - lr: 1.0000e-05 - 647ms/epoch - 7ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.10053
90/90 - 1s - loss: 0.0033 - mse: 0.0033 - mae: 0.0450 - val_loss: 0.1059 - val_mse: 0.1059 - val_mae: 0.3002 - lr: 1.0000e-05 - 569ms/epoch - 6ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.10053
90/90 - 1s - loss: 0.0033 - mse: 0.0033 - mae: 0.0459 - val_loss: 0.1061 - val_mse: 0.1061 - val_mae: 0.3006 - lr: 1.0000e-05 - 592ms/epoch - 7ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.10053
90/90 - 1s - loss: 0.0030 - mse: 0.0030 - mae: 0.0436 - val_loss: 0.1061 - val_mse: 0.1061 - val_mae: 0.3005 - lr: 1.0000e-05 - 558ms/epoch - 6ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.10053
90/90 - 1s - loss: 0.0033 - mse: 0.0033 - mae: 0.0445 - val_loss: 0.1062 - val_mse: 0.1062 - val_mae: 0.3006 - lr: 1.0000e-05 - 597ms/epoch - 7ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.10053
90/90 - 1s - loss: 0.0033 - mse: 0.0033 - mae: 0.0440 - val_loss: 0.1061 - val_mse: 0.1061 - val_mae: 0.3004 - lr: 1.0000e-05 - 574ms/epoch - 6ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.10053
90/90 - 1s - loss: 0.0034 - mse: 0.0034 - mae: 0.0465 - val_loss: 0.1063 - val_mse: 0.1063 - val_mae: 0.3007 - lr: 1.0000e-05 - 588ms/epoch - 7ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.10053
90/90 - 1s - loss: 0.0033 - mse: 0.0033 - mae: 0.0451 - val_loss: 0.1065 - val_mse: 0.1065 - val_mae: 0.3011 - lr: 1.0000e-05 - 641ms/epoch - 7ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.10053
90/90 - 1s - loss: 0.0034 - mse: 0.0034 - mae: 0.0458 - val_loss: 0.1064 - val_mse: 0.1064 - val_mae: 0.3009 - lr: 1.0000e-05 - 629ms/epoch - 7ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.10053
90/90 - 1s - loss: 0.0032 - mse: 0.0032 - mae: 0.0444 - val_loss: 0.1060 - val_mse: 0.1060 - val_mae: 0.3002 - lr: 1.0000e-05 - 590ms/epoch - 7ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.10053
90/90 - 1s - loss: 0.0031 - mse: 0.0031 - mae: 0.0434 - val_loss: 0.1061 - val_mse: 0.1061 - val_mae: 0.3002 - lr: 1.0000e-05 - 584ms/epoch - 6ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.10053
90/90 - 1s - loss: 0.0031 - mse: 0.0031 - mae: 0.0437 - val_loss: 0.1063 - val_mse: 0.1063 - val_mae: 0.3005 - lr: 1.0000e-05 - 587ms/epoch - 7ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.10053
90/90 - 1s - loss: 0.0032 - mse: 0.0032 - mae: 0.0451 - val_loss: 0.1069 - val_mse: 0.1069 - val_mae: 0.3014 - lr: 1.0000e-05 - 618ms/epoch - 7ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.10053
90/90 - 1s - loss: 0.0032 - mse: 0.0032 - mae: 0.0448 - val_loss: 0.1066 - val_mse: 0.1066 - val_mae: 0.3010 - lr: 1.0000e-05 - 561ms/epoch - 6ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.10053
90/90 - 1s - loss: 0.0031 - mse: 0.0031 - mae: 0.0429 - val_loss: 0.1068 - val_mse: 0.1068 - val_mae: 0.3012 - lr: 1.0000e-05 - 600ms/epoch - 7ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.10053
90/90 - 1s - loss: 0.0032 - mse: 0.0032 - mae: 0.0441 - val_loss: 0.1076 - val_mse: 0.1076 - val_mae: 0.3024 - lr: 1.0000e-05 - 574ms/epoch - 6ms/step
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.10053
90/90 - 1s - loss: 0.0033 - mse: 0.0033 - mae: 0.0455 - val_loss: 0.1080 - val_mse: 0.1080 - val_mae: 0.3031 - lr: 1.0000e-05 - 590ms/epoch - 7ms/step
Epoch 57/500

Epoch 00057: val_loss did not improve from 0.10053
90/90 - 1s - loss: 0.0036 - mse: 0.0036 - mae: 0.0472 - val_loss: 0.1087 - val_mse: 0.1087 - val_mae: 0.3040 - lr: 1.0000e-05 - 604ms/epoch - 7ms/step
Epoch 58/500

Epoch 00058: val_loss did not improve from 0.10053
90/90 - 1s - loss: 0.0031 - mse: 0.0031 - mae: 0.0450 - val_loss: 0.1091 - val_mse: 0.1091 - val_mae: 0.3045 - lr: 1.0000e-05 - 557ms/epoch - 6ms/step
Epoch 59/500

Epoch 00059: val_loss did not improve from 0.10053
90/90 - 1s - loss: 0.0032 - mse: 0.0032 - mae: 0.0440 - val_loss: 0.1087 - val_mse: 0.1087 - val_mae: 0.3039 - lr: 1.0000e-05 - 647ms/epoch - 7ms/step
Epoch 60/500

Epoch 00060: val_loss did not improve from 0.10053
90/90 - 1s - loss: 0.0027 - mse: 0.0027 - mae: 0.0421 - val_loss: 0.1094 - val_mse: 0.1094 - val_mae: 0.3050 - lr: 1.0000e-05 - 561ms/epoch - 6ms/step
Epoch 61/500

Epoch 00061: val_loss did not improve from 0.10053
90/90 - 1s - loss: 0.0034 - mse: 0.0034 - mae: 0.0451 - val_loss: 0.1099 - val_mse: 0.1099 - val_mae: 0.3057 - lr: 1.0000e-05 - 566ms/epoch - 6ms/step
Epoch 62/500

Epoch 00062: val_loss did not improve from 0.10053
90/90 - 1s - loss: 0.0031 - mse: 0.0031 - mae: 0.0438 - val_loss: 0.1107 - val_mse: 0.1107 - val_mae: 0.3068 - lr: 1.0000e-05 - 566ms/epoch - 6ms/step
Epoch 63/500

Epoch 00063: val_loss did not improve from 0.10053
90/90 - 1s - loss: 0.0031 - mse: 0.0031 - mae: 0.0436 - val_loss: 0.1110 - val_mse: 0.1110 - val_mae: 0.3072 - lr: 1.0000e-05 - 629ms/epoch - 7ms/step
Epoch 64/500

Epoch 00064: val_loss did not improve from 0.10053
90/90 - 1s - loss: 0.0034 - mse: 0.0034 - mae: 0.0454 - val_loss: 0.1100 - val_mse: 0.1100 - val_mae: 0.3058 - lr: 1.0000e-05 - 577ms/epoch - 6ms/step
Epoch 65/500

Epoch 00065: val_loss did not improve from 0.10053
90/90 - 1s - loss: 0.0030 - mse: 0.0030 - mae: 0.0434 - val_loss: 0.1098 - val_mse: 0.1098 - val_mae: 0.3055 - lr: 1.0000e-05 - 556ms/epoch - 6ms/step
Epoch 66/500

Epoch 00066: val_loss did not improve from 0.10053
90/90 - 1s - loss: 0.0030 - mse: 0.0030 - mae: 0.0425 - val_loss: 0.1101 - val_mse: 0.1101 - val_mae: 0.3059 - lr: 1.0000e-05 - 569ms/epoch - 6ms/step
Epoch 67/500

Epoch 00067: val_loss did not improve from 0.10053
90/90 - 1s - loss: 0.0032 - mse: 0.0032 - mae: 0.0448 - val_loss: 0.1100 - val_mse: 0.1100 - val_mae: 0.3057 - lr: 1.0000e-05 - 584ms/epoch - 6ms/step
Epoch 68/500

Epoch 00068: val_loss did not improve from 0.10053
90/90 - 1s - loss: 0.0032 - mse: 0.0032 - mae: 0.0439 - val_loss: 0.1109 - val_mse: 0.1109 - val_mae: 0.3070 - lr: 1.0000e-05 - 654ms/epoch - 7ms/step
Epoch 69/500

Epoch 00069: val_loss did not improve from 0.10053
90/90 - 1s - loss: 0.0035 - mse: 0.0035 - mae: 0.0465 - val_loss: 0.1112 - val_mse: 0.1112 - val_mae: 0.3075 - lr: 1.0000e-05 - 638ms/epoch - 7ms/step
Epoch 70/500

Epoch 00070: val_loss did not improve from 0.10053
90/90 - 1s - loss: 0.0034 - mse: 0.0034 - mae: 0.0460 - val_loss: 0.1107 - val_mse: 0.1107 - val_mae: 0.3066 - lr: 1.0000e-05 - 649ms/epoch - 7ms/step
Epoch 71/500

Epoch 00071: val_loss did not improve from 0.10053
90/90 - 1s - loss: 0.0034 - mse: 0.0034 - mae: 0.0467 - val_loss: 0.1105 - val_mse: 0.1105 - val_mae: 0.3064 - lr: 1.0000e-05 - 614ms/epoch - 7ms/step
Epoch 72/500

Epoch 00072: val_loss did not improve from 0.10053
90/90 - 1s - loss: 0.0032 - mse: 0.0032 - mae: 0.0452 - val_loss: 0.1115 - val_mse: 0.1115 - val_mae: 0.3078 - lr: 1.0000e-05 - 646ms/epoch - 7ms/step
Epoch 73/500

Epoch 00073: val_loss did not improve from 0.10053
90/90 - 1s - loss: 0.0032 - mse: 0.0032 - mae: 0.0450 - val_loss: 0.1121 - val_mse: 0.1121 - val_mae: 0.3087 - lr: 1.0000e-05 - 577ms/epoch - 6ms/step
Epoch 74/500

Epoch 00074: val_loss did not improve from 0.10053
90/90 - 1s - loss: 0.0029 - mse: 0.0029 - mae: 0.0429 - val_loss: 0.1126 - val_mse: 0.1126 - val_mae: 0.3093 - lr: 1.0000e-05 - 628ms/epoch - 7ms/step
Epoch 75/500

Epoch 00075: val_loss did not improve from 0.10053
90/90 - 1s - loss: 0.0031 - mse: 0.0031 - mae: 0.0433 - val_loss: 0.1123 - val_mse: 0.1123 - val_mae: 0.3089 - lr: 1.0000e-05 - 630ms/epoch - 7ms/step
Epoch 76/500

Epoch 00076: val_loss did not improve from 0.10053
90/90 - 1s - loss: 0.0031 - mse: 0.0031 - mae: 0.0439 - val_loss: 0.1126 - val_mse: 0.1126 - val_mae: 0.3094 - lr: 1.0000e-05 - 593ms/epoch - 7ms/step
Epoch 77/500

Epoch 00077: val_loss did not improve from 0.10053
90/90 - 1s - loss: 0.0030 - mse: 0.0030 - mae: 0.0424 - val_loss: 0.1139 - val_mse: 0.1139 - val_mae: 0.3112 - lr: 1.0000e-05 - 624ms/epoch - 7ms/step
Epoch 78/500

Epoch 00078: val_loss did not improve from 0.10053
90/90 - 1s - loss: 0.0030 - mse: 0.0030 - mae: 0.0435 - val_loss: 0.1145 - val_mse: 0.1145 - val_mae: 0.3120 - lr: 1.0000e-05 - 584ms/epoch - 6ms/step
Epoch 79/500

Epoch 00079: val_loss did not improve from 0.10053
90/90 - 1s - loss: 0.0029 - mse: 0.0029 - mae: 0.0427 - val_loss: 0.1149 - val_mse: 0.1149 - val_mae: 0.3127 - lr: 1.0000e-05 - 627ms/epoch - 7ms/step
Epoch 80/500

Epoch 00080: val_loss did not improve from 0.10053
90/90 - 1s - loss: 0.0033 - mse: 0.0033 - mae: 0.0452 - val_loss: 0.1143 - val_mse: 0.1143 - val_mae: 0.3118 - lr: 1.0000e-05 - 631ms/epoch - 7ms/step
Epoch 00080: early stopping
SMA
Prediction vs Close:		52.61% Accuracy
Prediction vs Prediction:	50.75% Accuracy
MSE:	 60.72793966434877 
RMSE:	 7.792813334370892 
MAPE:	 6.245110563496496

EMA
Prediction vs Close:		53.36% Accuracy
Prediction vs Prediction:	49.63% Accuracy
MSE:	 24.31424046870049 
RMSE:	 4.930947218202654 
MAPE:	 4.072073008160127

WMA
Prediction vs Close:		53.36% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 68.15170187847146 
RMSE:	 8.255404404296101 
MAPE:	 6.806281257852447

DEMA
Prediction vs Close:		51.49% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 43.922177447330554 
RMSE:	 6.627380888958364 
MAPE:	 5.414540694927783

KAMA
Prediction vs Close:		56.34% Accuracy
Prediction vs Prediction:	49.25% Accuracy
MSE:	 23.99643218488633 
RMSE:	 4.8986153334270215 
MAPE:	 3.8674202618000764

MIDPOINT
Prediction vs Close:		50.37% Accuracy
Prediction vs Prediction:	48.88% Accuracy
MSE:	 28.7191654730941 
RMSE:	 5.359026541555295 
MAPE:	 4.42030651732032

T3
Prediction vs Close:		51.87% Accuracy
Prediction vs Prediction:	47.39% Accuracy
MSE:	 57.05690799124099 
RMSE:	 7.553602318843705 
MAPE:	 6.064989593585796

TEMA
Prediction vs Close:		48.88% Accuracy
Prediction vs Prediction:	48.88% Accuracy
MSE:	 20.964688340236016 
RMSE:	 4.578721256009806 
MAPE:	 3.7212897315589664
Runtime: mins: 16.46426535278334
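For reference, the per-MA error and accuracy figures reported above could be computed along these lines. This is a hedged sketch: `close` and `pred` are toy stand-ins for the test-set closes and hybrid predictions, and the directional-accuracy definition (predicted move vs. realised move relative to the previous close) is an assumption, not taken from the notebook:

```python
import numpy as np

# Toy stand-ins for actual closes and model predictions.
close = np.array([100.0, 101.5, 100.8, 102.2, 103.0])
pred  = np.array([100.4, 101.1, 101.7, 101.9, 103.4])

mse  = np.mean((close - pred) ** 2)
rmse = np.sqrt(mse)
mape = np.mean(np.abs((close - pred) / close)) * 100

# Direction hit-rate: does the predicted move match the realised move?
actual_dir = np.sign(np.diff(close))
pred_dir   = np.sign(pred[1:] - close[:-1])  # prediction vs previous close
accuracy = np.mean(actual_dir == pred_dir) * 100
print(f"MSE {mse:.4f}  RMSE {rmse:.4f}  MAPE {mape:.4f}%  Accuracy {accuracy:.2f}%")
```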

Architecture Used

In [ ]:
from google.colab import files
import cv2
uploaded = files.upload()
Saving Experiment3.png to Experiment3 (1).png
In [ ]:
img = cv2.imread('Experiment3.png')
plt.figure(figsize=(20,10))
plt.axis("off")
plt.title('LSTM Architecture '+imgfile,fontsize=18)
plt.imshow(img)
Out[ ]:
<matplotlib.image.AxesImage at 0x7fdfb2abd6d0>

Model Plots

In [75]:
with open('simulation3_data.json') as json_file:
    simulation3 = json.load(json_file)
fileimg = 'Experiment3'
In [76]:
for i in range(len(list(simulation3.keys()))):
  SIM = list(simulation3.keys())[i]
  plot_train(simulation3,SIM)
  plot_test(simulation3,SIM)
----- Train RMSE for SMA ----- 8.952567216610497
----- Train_MSE_LSTM for SMA ----- 80.14845976792903
----- Train MAE LSTM for SMA ----- 7.821491223024573
----- Test RMSE for SMA----- 7.792813334370892
----- Test_MSE_LSTM for SMA----- 60.72793966434877
----- Test_MAE_LSTM for SMA----- 6.245110563496496
----- Train RMSE for EMA ----- 10.570254973011156
----- Train_MSE_LSTM for EMA ----- 111.73029019446709
----- Train MAE LSTM for EMA ----- 9.451268005811858
----- Test RMSE for EMA----- 4.930947218202654
----- Test_MSE_LSTM for EMA----- 24.31424046870049
----- Test_MAE_LSTM for EMA----- 4.072073008160127
----- Train RMSE for WMA ----- 10.97436711794096
----- Train_MSE_LSTM for WMA ----- 120.43673363934376
----- Train MAE LSTM for WMA ----- 9.925815236695067
----- Test RMSE for WMA----- 8.255404404296101
----- Test_MSE_LSTM for WMA----- 68.15170187847146
----- Test_MAE_LSTM for WMA----- 6.806281257852447
----- Train RMSE for DEMA ----- 12.066299877592092
----- Train_MSE_LSTM for DEMA ----- 145.59559273597895
----- Train MAE LSTM for DEMA ----- 10.907355915123395
----- Test RMSE for DEMA----- 6.627380888958364
----- Test_MSE_LSTM for DEMA----- 43.922177447330554
----- Test_MAE_LSTM for DEMA----- 5.414540694927783
----- Train RMSE for KAMA ----- 10.869146791217638
----- Train_MSE_LSTM for KAMA ----- 118.13835196903668
----- Train MAE LSTM for KAMA ----- 9.759291692937742
----- Test RMSE for KAMA----- 4.8986153334270215
----- Test_MSE_LSTM for KAMA----- 23.99643218488633
----- Test_MAE_LSTM for KAMA----- 3.8674202618000764
----- Train RMSE for MIDPOINT ----- 9.691011765630286
----- Train_MSE_LSTM for MIDPOINT ----- 93.91570904158462
----- Train MAE LSTM for MIDPOINT ----- 8.650189046672622
----- Test RMSE for MIDPOINT----- 5.359026541555295
----- Test_MSE_LSTM for MIDPOINT----- 28.7191654730941
----- Test_MAE_LSTM for MIDPOINT----- 4.42030651732032
----- Train RMSE for T3 ----- 12.129650565307267
----- Train_MSE_LSTM for T3 ----- 147.12842283645892
----- Train MAE LSTM for T3 ----- 10.936634904781638
----- Test RMSE for T3----- 7.553602318843705
----- Test_MSE_LSTM for T3----- 57.05690799124099
----- Test_MAE_LSTM for T3----- 6.064989593585796
----- Train RMSE for TEMA ----- 7.376932734913034
----- Train_MSE_LSTM for TEMA ----- 54.419136575431494
----- Train MAE LSTM for TEMA ----- 5.050424731629302
----- Test RMSE for TEMA----- 4.578721256009806
----- Test_MSE_LSTM for TEMA----- 20.964688340236016
----- Test_MAE_LSTM for TEMA----- 3.7212897315589664

Univariate ARIMA Multistep Multivariate LSTM Hybrid Model: Experiment 4

From the experiments above it is evident that moving averages with longer periods produce loss plots showing underrepresented data and underfitting; hence only the MAs with shorter periods, such as T3 or TRIMA, are kept. Going forward, EMA, WMA, and DEMA will be ignored.
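The screening idea described above can be sketched by comparing the excess kurtosis of first-differenced moving averages: periods whose differenced series stay closer to mesokurtic (excess kurtosis near 0) are candidates to keep. A minimal sketch with stand-in data; the `excess_kurtosis` helper is illustrative and not from the notebook:

```python
import numpy as np
import pandas as pd

# Stand-in price series for illustration only.
rng = np.random.default_rng(1)
close = pd.Series(100 + rng.normal(0, 1, 500).cumsum())

def excess_kurtosis(x):
    """Fourth standardized moment minus 3 (0 for a normal distribution)."""
    x = x - x.mean()
    return (x**4).mean() / (x**2).mean() ** 2 - 3.0

for period in (5, 10, 30):
    ma = close.rolling(period).mean().dropna()
    k = excess_kurtosis(ma.diff().dropna().to_numpy())
    print(f"SMA({period}) diff excess kurtosis: {k:.3f}")
```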

In [ ]:
def get_lstm(data,original_data, train_len, test_len,img_file,ma ,lstm_len=3):
    # prepare train and test data
    X_value = pd.DataFrame(data.iloc[:, :])
    y_value = pd.DataFrame(data.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scaler.fit(X_value)
    y_scaler.fit(y_value)
    X_scale_dataset = X_scaler.transform(X_value)
    y_scale_dataset = y_scaler.transform(y_value)
    # Get data and check shape
    X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)  # each sample in X is a 3 x n_features window (3 days of data); yc holds the corresponding closing prices
    # pdb.set_trace()
    X_train, X_test, = split_train_test(X)
    y_train, y_test, = split_train_test(y)
    # yc_train, yc_test, = split_train_test(original_data)
    index_train, index_test, = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
    det = 20
    input_dim = X_train.shape[1]#3
    feature_size = X_train.shape[2]#24
    output_dim = y_train.shape[1]#1



    # # Option 1
    # # Set up & fit LSTM RNN
    # model = Sequential()
    # model.add(LSTM(256, activation='relu', kernel_initializer='he_normal', input_shape=(input_dim, feature_size)))
    # model.add(Dense(units=64,activation='relu'))
    # model.add(Dropout(0.5))
    # model.add(Dense(units=output_dim))
    # model.compile(optimizer=Adam(learning_rate = 0.001), loss='mse')

    # ## Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()


    # # option 2
    # model = Sequential()
    # model.add(Bidirectional(LSTM(units= 128), input_shape=(input_dim, feature_size)))
    # model.add(Dense(64))
    # model.add(Dense(units=output_dim))
    # model.compile(optimizer=Adam(lr = 0.001), loss='mean_squared_error', metrics=['accuracy'])
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()




    # # Option 3
    # # define custom activation
    # # 
    # class Double_Tanh(Activation):
    #     def __init__(self, activation, **kwargs):
    #         super(Double_Tanh, self).__init__(activation, **kwargs)
    #         self.__name__ = 'double_tanh'

    # def double_tanh(x):
    #     return (K.tanh(x) * 2)

    # get_custom_objects().update({'double_tanh':Double_Tanh(double_tanh)})
    #     # Model Generation
    # model = Sequential()
    # #check https://machinelearningmastery.com/use-weight-regularization-lstm-networks-time-series-forecasting/
    # model.add(LSTM(25, input_shape=(input_dim, feature_size), dropout=0.2, kernel_regularizer=l1_l2(0.00,0.00), bias_regularizer=l1_l2(0.00,0.00)))
    # model.add(Dense(1))
    # model.add(Activation(double_tanh))
    # model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mse', 'mae'])
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()

    # Option 4
    # Set up & fit LSTM RNN
    model = Sequential()
    model.add(LSTM(units=lstm_len, return_sequences=True, input_shape=(input_dim, feature_size)))
    model.add(LSTM(units=int(lstm_len/2)))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='mean_squared_error', optimizer='adam')
    # Common code
    callbacks = [
    EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    ModelCheckpoint('LSTM4.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    fname1 = img_file+'.png'
    tensorflow.keras.utils.plot_model(
        model, to_file=fname1, show_shapes=True, show_dtype=False,
        show_layer_names=True, expand_nested=False, dpi=96,
        layer_range=None, show_layer_activations=False
    )
    history = model.fit(X_train, y_train, epochs=500, batch_size=int(optimized_period[ma]), verbose=2, callbacks=callbacks, validation_data=(X_test, y_test), shuffle=False)
    # plot loss
    fname2 = img_file+'-'+ma
    plt.title(img_file+'-'+ma+' Loss')
    plt.xlabel("Epochs")
    plt.ylabel("Loss")
    pyplot.plot(history.history['loss'], label='train')
    pyplot.plot(history.history['val_loss'], label='validation')
    pyplot.legend()
    pyplot.savefig(fname2+'.png',dpi='figure')
    pyplot.show()



    # Generate predictions
    predictiontr = model.predict(X_train, verbose=0)
    predictiontr = (y_scaler.inverse_transform(predictiontr)-det).tolist()
    outputtr = []
    for i in range(len(predictiontr)):
        outputtr.extend(predictiontr[i])
    predictiontr = outputtr
    # Generate error data

    ## replace with yc, X_test generated by the new multistep method
    mse_tr = mean_squared_error(y_train, predictiontr)
    rmse_tr = mse_tr ** 0.5
    # mape = mean_absolute_percentage_error(X_test, pd.Series(predictiontr))
    mae_tr = mean_absolute_error(y_train, pd.Series(predictiontr))
    # Original_tr = pd.Series(yc_train)
    Original_tr = y_scaler.inverse_transform(y_train).flatten().tolist()


    predictionte = model.predict(X_test, verbose=0)
    predictionte = (y_scaler.inverse_transform(predictionte)-det).tolist()
    outputte = []
    for i in range(len(predictionte)):
        outputte.extend(predictionte[i])
    predictionte = outputte
    # Generate error data

    mse_te = mean_squared_error(y_test, predictionte)
    rmse_te = mse_te ** 0.5
    # mape = mean_absolute_percentage_error(X_test, pd.Series(predictionte))
    mae_te = mean_absolute_error(y_test, pd.Series(predictionte))
    # Original_te = pd.Series(yc_test)
    Original_te = y_scaler.inverse_transform(y_test).flatten().tolist()

    return Original_tr, predictiontr, mse_tr, rmse_tr, mae_tr, Original_te, predictionte, mse_te, rmse_te, mae_te
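The directional-accuracy loop in the driver cell further down can also be written vectorized. A minimal NumPy sketch (the helper name and the toy series are hypothetical; ties count as misses, matching the `result_1` logic):

```python
import numpy as np

def directional_accuracy(pred, actual):
    """Fraction of steps where the prediction and the actual series move in
    the same direction relative to the previous actual close
    (vectorized equivalent of the result_1 loop; ties count as misses)."""
    pred = np.asarray(pred, dtype=float)
    actual = np.asarray(actual, dtype=float)
    pred_up = pred[1:] > actual[:-1]    # prediction above previous close
    pred_down = pred[1:] < actual[:-1]  # prediction below previous close
    act_up = actual[1:] > actual[:-1]   # actual moved up
    act_down = actual[1:] < actual[:-1] # actual moved down
    hits = (pred_up & act_up) | (pred_down & act_down)
    return hits.mean()

# Toy example with made-up values: steps 1 and 3 agree, step 2 is a tie
acc = directional_accuracy([10, 12, 11, 13], [10, 11, 12, 14])
```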
In [ ]:
if __name__ == '__main__':
    start_time = timeit.default_timer()
    simulation4 = {}
    imgfile = 'Experiment4'
    for ma in optimized_period:
              print(ma)
              print(functions[ma])
              print(int(optimized_period[ma]))
              low_vol = df.apply(lambda c: functions[ma](c, timeperiod=int(optimized_period[ma])))
              low_vol = low_vol.fillna(0)
              low_vol_data = df['close']
              high_vol = pd.DataFrame()
              df2 = df.copy()
              for i in df2.columns:
                if i in low_vol.columns:
                  high_vol[i] = df2[i].subtract(low_vol[i], fill_value=0)
              high_vol_data = df['close']
              ## *****************************************************
              # Generate ARIMA and LSTM predictions
              print('\nWorking on ' + ma + ' predictions')
              try:
                print('parameters used : ', train_len, test_len)
                low_vol_Original, low_vol_prediction, low_vol_mse, low_vol_rmse, low_vol_mae = get_arima(low_vol, low_vol_data, train_len, test_len)
              except Exception:
                  print('ARIMA error, skipping to next MA type')
                  continue
              Original_tr, predictiontr, mse_tr, rmse_tr, mae_tr, high_vol_Original, high_vol_prediction, high_vol_mse, high_vol_rmse, high_vol_mae = get_lstm(high_vol, high_vol_data, train_len, test_len, imgfile, ma)
              final_prediction_tr = df['close'].head(train_len).values + pd.Series(predictiontr) # ignoring first 3 steps 
              mse_ftr = mean_squared_error(df['close'].head(train_len).values,final_prediction_tr.values)
              rmse_ftr = mse_ftr ** 0.5
              mape_ftr = mean_absolute_percentage_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
              mae_ftr = mean_absolute_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)

              final_prediction = pd.Series(low_vol_prediction[3:]) + pd.Series(high_vol_prediction)
              mse = mean_squared_error(df['close'].tail(test_len).values,final_prediction.values)
              rmse = mse ** 0.5
              mape = mean_absolute_percentage_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
              mae = mean_absolute_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
              # Generate prediction accuracy
              actual = df['close'].tail(test_len).values
              result_1 = []
              result_2 = []
              for i in range(1, len(final_prediction)):
                  # Compare prediction to previous close price
                  if final_prediction[i] > actual[i-1] and actual[i] > actual[i-1]:
                      result_1.append(1)
                  elif final_prediction[i] < actual[i-1] and actual[i] < actual[i-1]:
                      result_1.append(1)
                  else:
                      result_1.append(0)

                  # Compare prediction to previous prediction
                  if final_prediction[i] > final_prediction[i-1] and actual[i] > actual[i-1]:
                      result_2.append(1)
                  elif final_prediction[i] < final_prediction[i-1] and actual[i] < actual[i-1]:
                      result_2.append(1)
                  else:
                      result_2.append(0)

              accuracy_1 = np.mean(result_1)
              accuracy_2 = np.mean(result_2)

              simulation4[ma] = {'low_vol': {'original':list(low_vol_Original), 'prediction': list(low_vol_prediction) , 'mse': low_vol_mse,
                                            'rmse': low_vol_rmse, 'mae' : low_vol_mae},
                                'high_vol': {'original':list(high_vol_Original),'prediction': list(high_vol_prediction), 'mse': high_vol_mse,
                                            'rmse': high_vol_rmse, 'mae' : high_vol_mae},
                                'final_tr': {'original':df['close'].head(train_len).tolist(),'prediction': final_prediction_tr.values.tolist(), 'mse': mse_ftr,
                                            'rmse': rmse_ftr, 'mae': mae_ftr, 'mape': mape_ftr},
                                'final': {'original': df['close'].tail(test_len).tolist(), 'prediction': final_prediction.values.tolist(), 'mse': mse,
                                          'rmse': rmse, 'mae': mae, 'mape': mape},
                                'accuracy': {'prediction vs close': accuracy_1, 'prediction vs prediction': accuracy_2}}

              # save simulation data here as checkpoint
              with open('simulation4_data.json', 'w') as fp:
                  json.dump(simulation4, fp)

              # Use a separate name so the outer loop variable `ma` is not clobbered
              for key in simulation4.keys():
                  print('\n' + key)
                  print('Prediction vs Close:\t\t' + str(round(100*simulation4[key]['accuracy']['prediction vs close'], 2))
                        + '% Accuracy')
                  print('Prediction vs Prediction:\t' + str(round(100*simulation4[key]['accuracy']['prediction vs prediction'], 2))
                        + '% Accuracy')
                  print('MSE:\t', simulation4[key]['final']['mse'],
                        '\nRMSE:\t', simulation4[key]['final']['rmse'],
                        '\nMAE:\t', simulation4[key]['final']['mae'])
    elapsed = timeit.default_timer() - start_time
    print('Runtime (mins):', elapsed/60)
SMA
SMA([input_arrays], [timeperiod=30])

Simple Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
17

Working on SMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.37 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4157.020, Time=0.02 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3687.148, Time=0.02 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.12 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3458.651, Time=0.04 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3322.133, Time=0.06 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=0.39 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.44 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3324.133, Time=0.12 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 1.570 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1657.067
Date:                Sun, 12 Dec 2021   AIC                           3322.133
Time:                        13:12:09   BIC                           3340.897
Sample:                             0   HQIC                          3329.339
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1966      0.003   -387.226      0.000      -1.203      -1.191
ar.L2         -0.8952      0.006   -138.692      0.000      -0.908      -0.883
ar.L3         -0.3968      0.006    -68.284      0.000      -0.408      -0.385
sigma2         3.5858      0.017    214.535      0.000       3.553       3.619
===================================================================================
Ljung-Box (L1) (Q):                  14.47   Jarque-Bera (JB):           2428881.42
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       271.99
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.05330, saving model to LSTM4.h5
48/48 - 6s - loss: 1.4217 - val_loss: 0.0533 - lr: 0.0010 - 6s/epoch - 129ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.05330
48/48 - 0s - loss: 1.3808 - val_loss: 0.0552 - lr: 0.0010 - 223ms/epoch - 5ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.05330
48/48 - 0s - loss: 1.2868 - val_loss: 0.0567 - lr: 0.0010 - 218ms/epoch - 5ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.05330
48/48 - 0s - loss: 1.1463 - val_loss: 0.0611 - lr: 0.0010 - 222ms/epoch - 5ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.05330
48/48 - 0s - loss: 1.0599 - val_loss: 0.0659 - lr: 0.0010 - 220ms/epoch - 5ms/step
Epoch 6/500

Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00006: val_loss did not improve from 0.05330
48/48 - 0s - loss: 1.0010 - val_loss: 0.0709 - lr: 0.0010 - 227ms/epoch - 5ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.05330
48/48 - 0s - loss: 0.9705 - val_loss: 0.0714 - lr: 1.0000e-04 - 223ms/epoch - 5ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.05330
48/48 - 0s - loss: 0.9661 - val_loss: 0.0719 - lr: 1.0000e-04 - 215ms/epoch - 4ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.05330
48/48 - 0s - loss: 0.9619 - val_loss: 0.0725 - lr: 1.0000e-04 - 226ms/epoch - 5ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.05330
48/48 - 0s - loss: 0.9577 - val_loss: 0.0730 - lr: 1.0000e-04 - 227ms/epoch - 5ms/step
Epoch 11/500

Epoch 00011: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00011: val_loss did not improve from 0.05330
48/48 - 0s - loss: 0.9537 - val_loss: 0.0736 - lr: 1.0000e-04 - 222ms/epoch - 5ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.05330
48/48 - 0s - loss: 0.9511 - val_loss: 0.0737 - lr: 1.0000e-05 - 216ms/epoch - 5ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.05330
48/48 - 0s - loss: 0.9507 - val_loss: 0.0737 - lr: 1.0000e-05 - 218ms/epoch - 5ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.05330
48/48 - 0s - loss: 0.9503 - val_loss: 0.0738 - lr: 1.0000e-05 - 221ms/epoch - 5ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.05330
48/48 - 0s - loss: 0.9499 - val_loss: 0.0738 - lr: 1.0000e-05 - 224ms/epoch - 5ms/step
Epoch 16/500

Epoch 00016: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00016: val_loss did not improve from 0.05330
48/48 - 0s - loss: 0.9495 - val_loss: 0.0739 - lr: 1.0000e-05 - 223ms/epoch - 5ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.05330
48/48 - 0s - loss: 0.9491 - val_loss: 0.0740 - lr: 1.0000e-05 - 214ms/epoch - 4ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.05330
48/48 - 0s - loss: 0.9487 - val_loss: 0.0740 - lr: 1.0000e-05 - 220ms/epoch - 5ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.05330
48/48 - 0s - loss: 0.9483 - val_loss: 0.0741 - lr: 1.0000e-05 - 230ms/epoch - 5ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.05330
48/48 - 0s - loss: 0.9479 - val_loss: 0.0742 - lr: 1.0000e-05 - 220ms/epoch - 5ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.05330
48/48 - 0s - loss: 0.9475 - val_loss: 0.0742 - lr: 1.0000e-05 - 217ms/epoch - 5ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.05330
48/48 - 0s - loss: 0.9471 - val_loss: 0.0743 - lr: 1.0000e-05 - 216ms/epoch - 4ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.05330
48/48 - 0s - loss: 0.9466 - val_loss: 0.0744 - lr: 1.0000e-05 - 214ms/epoch - 4ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.05330
48/48 - 0s - loss: 0.9462 - val_loss: 0.0744 - lr: 1.0000e-05 - 226ms/epoch - 5ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.05330
48/48 - 0s - loss: 0.9458 - val_loss: 0.0745 - lr: 1.0000e-05 - 211ms/epoch - 4ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.05330
48/48 - 0s - loss: 0.9454 - val_loss: 0.0746 - lr: 1.0000e-05 - 219ms/epoch - 5ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.05330
48/48 - 0s - loss: 0.9450 - val_loss: 0.0747 - lr: 1.0000e-05 - 221ms/epoch - 5ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.05330
48/48 - 0s - loss: 0.9446 - val_loss: 0.0747 - lr: 1.0000e-05 - 221ms/epoch - 5ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.05330
48/48 - 0s - loss: 0.9442 - val_loss: 0.0748 - lr: 1.0000e-05 - 219ms/epoch - 5ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.05330
48/48 - 0s - loss: 0.9438 - val_loss: 0.0749 - lr: 1.0000e-05 - 220ms/epoch - 5ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.05330
48/48 - 0s - loss: 0.9433 - val_loss: 0.0750 - lr: 1.0000e-05 - 212ms/epoch - 4ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.05330
48/48 - 0s - loss: 0.9429 - val_loss: 0.0750 - lr: 1.0000e-05 - 213ms/epoch - 4ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.05330
48/48 - 0s - loss: 0.9425 - val_loss: 0.0751 - lr: 1.0000e-05 - 222ms/epoch - 5ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.05330
48/48 - 0s - loss: 0.9421 - val_loss: 0.0752 - lr: 1.0000e-05 - 215ms/epoch - 4ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.05330
48/48 - 0s - loss: 0.9417 - val_loss: 0.0753 - lr: 1.0000e-05 - 207ms/epoch - 4ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.05330
48/48 - 0s - loss: 0.9413 - val_loss: 0.0753 - lr: 1.0000e-05 - 225ms/epoch - 5ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.05330
48/48 - 0s - loss: 0.9409 - val_loss: 0.0754 - lr: 1.0000e-05 - 218ms/epoch - 5ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.05330
48/48 - 0s - loss: 0.9405 - val_loss: 0.0755 - lr: 1.0000e-05 - 214ms/epoch - 4ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.05330
48/48 - 0s - loss: 0.9400 - val_loss: 0.0756 - lr: 1.0000e-05 - 208ms/epoch - 4ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.05330
48/48 - 0s - loss: 0.9396 - val_loss: 0.0756 - lr: 1.0000e-05 - 220ms/epoch - 5ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.05330
48/48 - 0s - loss: 0.9392 - val_loss: 0.0757 - lr: 1.0000e-05 - 221ms/epoch - 5ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.05330
48/48 - 0s - loss: 0.9388 - val_loss: 0.0758 - lr: 1.0000e-05 - 216ms/epoch - 4ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.05330
48/48 - 0s - loss: 0.9384 - val_loss: 0.0759 - lr: 1.0000e-05 - 221ms/epoch - 5ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.05330
48/48 - 0s - loss: 0.9380 - val_loss: 0.0760 - lr: 1.0000e-05 - 220ms/epoch - 5ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.05330
48/48 - 0s - loss: 0.9376 - val_loss: 0.0761 - lr: 1.0000e-05 - 219ms/epoch - 5ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.05330
48/48 - 0s - loss: 0.9371 - val_loss: 0.0761 - lr: 1.0000e-05 - 214ms/epoch - 4ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.05330
48/48 - 0s - loss: 0.9367 - val_loss: 0.0762 - lr: 1.0000e-05 - 227ms/epoch - 5ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.05330
48/48 - 0s - loss: 0.9363 - val_loss: 0.0763 - lr: 1.0000e-05 - 223ms/epoch - 5ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.05330
48/48 - 0s - loss: 0.9359 - val_loss: 0.0764 - lr: 1.0000e-05 - 222ms/epoch - 5ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.05330
48/48 - 0s - loss: 0.9355 - val_loss: 0.0765 - lr: 1.0000e-05 - 225ms/epoch - 5ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.05330
48/48 - 0s - loss: 0.9351 - val_loss: 0.0766 - lr: 1.0000e-05 - 215ms/epoch - 4ms/step
Epoch 00051: early stopping
SMA
Prediction vs Close:		55.22% Accuracy
Prediction vs Prediction:	50.37% Accuracy
MSE:	 23.31830336349202 
RMSE:	 4.828902915103183 
MAE:	 3.806885992834059
EMA
EMA([input_arrays], [timeperiod=30])

Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
51

Working on EMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.30 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4231.556, Time=0.02 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3761.238, Time=0.02 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.15 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3532.227, Time=0.04 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3394.496, Time=0.09 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=0.81 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.41 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3396.496, Time=0.21 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 2.055 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1693.248
Date:                Sun, 12 Dec 2021   AIC                           3394.496
Time:                        13:13:27   BIC                           3413.260
Sample:                             0   HQIC                          3401.702
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1982      0.003   -389.569      0.000      -1.204      -1.192
ar.L2         -0.8976      0.006   -139.811      0.000      -0.910      -0.885
ar.L3         -0.3984      0.006    -68.662      0.000      -0.410      -0.387
sigma2         3.9230      0.018    215.372      0.000       3.887       3.959
===================================================================================
Ljung-Box (L1) (Q):                  14.54   Jarque-Bera (JB):           2462173.05
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       273.82
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.05737, saving model to LSTM4.h5
16/16 - 3s - loss: 1.5587 - val_loss: 0.0574 - lr: 0.0010 - 3s/epoch - 208ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.05737
16/16 - 0s - loss: 1.5128 - val_loss: 0.0594 - lr: 0.0010 - 88ms/epoch - 5ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.05737
16/16 - 0s - loss: 1.4708 - val_loss: 0.0613 - lr: 0.0010 - 87ms/epoch - 5ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.05737
16/16 - 0s - loss: 1.4320 - val_loss: 0.0632 - lr: 0.0010 - 93ms/epoch - 6ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.05737
16/16 - 0s - loss: 1.3971 - val_loss: 0.0651 - lr: 0.0010 - 100ms/epoch - 6ms/step
Epoch 6/500

Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00006: val_loss did not improve from 0.05737
16/16 - 0s - loss: 1.3669 - val_loss: 0.0669 - lr: 0.0010 - 91ms/epoch - 6ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.05737
16/16 - 0s - loss: 1.3490 - val_loss: 0.0671 - lr: 1.0000e-04 - 88ms/epoch - 6ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.05737
16/16 - 0s - loss: 1.3465 - val_loss: 0.0672 - lr: 1.0000e-04 - 84ms/epoch - 5ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.05737
16/16 - 0s - loss: 1.3440 - val_loss: 0.0674 - lr: 1.0000e-04 - 99ms/epoch - 6ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.05737
16/16 - 0s - loss: 1.3415 - val_loss: 0.0676 - lr: 1.0000e-04 - 89ms/epoch - 6ms/step
Epoch 11/500

Epoch 00011: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00011: val_loss did not improve from 0.05737
16/16 - 0s - loss: 1.3391 - val_loss: 0.0678 - lr: 1.0000e-04 - 95ms/epoch - 6ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.05737
16/16 - 0s - loss: 1.3375 - val_loss: 0.0678 - lr: 1.0000e-05 - 90ms/epoch - 6ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.05737
16/16 - 0s - loss: 1.3372 - val_loss: 0.0678 - lr: 1.0000e-05 - 89ms/epoch - 6ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.05737
16/16 - 0s - loss: 1.3370 - val_loss: 0.0679 - lr: 1.0000e-05 - 94ms/epoch - 6ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.05737
16/16 - 0s - loss: 1.3367 - val_loss: 0.0679 - lr: 1.0000e-05 - 96ms/epoch - 6ms/step
Epoch 16/500

Epoch 00016: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00016: val_loss did not improve from 0.05737
16/16 - 0s - loss: 1.3365 - val_loss: 0.0679 - lr: 1.0000e-05 - 91ms/epoch - 6ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.05737
16/16 - 0s - loss: 1.3363 - val_loss: 0.0679 - lr: 1.0000e-05 - 96ms/epoch - 6ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.05737
16/16 - 0s - loss: 1.3360 - val_loss: 0.0679 - lr: 1.0000e-05 - 101ms/epoch - 6ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.05737
16/16 - 0s - loss: 1.3358 - val_loss: 0.0680 - lr: 1.0000e-05 - 95ms/epoch - 6ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.05737
16/16 - 0s - loss: 1.3355 - val_loss: 0.0680 - lr: 1.0000e-05 - 92ms/epoch - 6ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.05737
16/16 - 0s - loss: 1.3353 - val_loss: 0.0680 - lr: 1.0000e-05 - 86ms/epoch - 5ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.05737
16/16 - 0s - loss: 1.3350 - val_loss: 0.0680 - lr: 1.0000e-05 - 88ms/epoch - 6ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.05737
16/16 - 0s - loss: 1.3348 - val_loss: 0.0680 - lr: 1.0000e-05 - 86ms/epoch - 5ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.05737
16/16 - 0s - loss: 1.3345 - val_loss: 0.0681 - lr: 1.0000e-05 - 86ms/epoch - 5ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.05737
16/16 - 0s - loss: 1.3343 - val_loss: 0.0681 - lr: 1.0000e-05 - 88ms/epoch - 6ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.05737
16/16 - 0s - loss: 1.3340 - val_loss: 0.0681 - lr: 1.0000e-05 - 88ms/epoch - 6ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.05737
16/16 - 0s - loss: 1.3338 - val_loss: 0.0681 - lr: 1.0000e-05 - 89ms/epoch - 6ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.05737
16/16 - 0s - loss: 1.3335 - val_loss: 0.0681 - lr: 1.0000e-05 - 90ms/epoch - 6ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.05737
16/16 - 0s - loss: 1.3333 - val_loss: 0.0682 - lr: 1.0000e-05 - 86ms/epoch - 5ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.05737
16/16 - 0s - loss: 1.3330 - val_loss: 0.0682 - lr: 1.0000e-05 - 81ms/epoch - 5ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.05737
16/16 - 0s - loss: 1.3328 - val_loss: 0.0682 - lr: 1.0000e-05 - 87ms/epoch - 5ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.05737
16/16 - 0s - loss: 1.3325 - val_loss: 0.0682 - lr: 1.0000e-05 - 87ms/epoch - 5ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.05737
16/16 - 0s - loss: 1.3323 - val_loss: 0.0683 - lr: 1.0000e-05 - 85ms/epoch - 5ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.05737
16/16 - 0s - loss: 1.3320 - val_loss: 0.0683 - lr: 1.0000e-05 - 89ms/epoch - 6ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.05737
16/16 - 0s - loss: 1.3318 - val_loss: 0.0683 - lr: 1.0000e-05 - 86ms/epoch - 5ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.05737
16/16 - 0s - loss: 1.3315 - val_loss: 0.0683 - lr: 1.0000e-05 - 89ms/epoch - 6ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.05737
16/16 - 0s - loss: 1.3313 - val_loss: 0.0684 - lr: 1.0000e-05 - 85ms/epoch - 5ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.05737
16/16 - 0s - loss: 1.3310 - val_loss: 0.0684 - lr: 1.0000e-05 - 86ms/epoch - 5ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.05737
16/16 - 0s - loss: 1.3308 - val_loss: 0.0684 - lr: 1.0000e-05 - 87ms/epoch - 5ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.05737
16/16 - 0s - loss: 1.3305 - val_loss: 0.0684 - lr: 1.0000e-05 - 85ms/epoch - 5ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.05737
16/16 - 0s - loss: 1.3303 - val_loss: 0.0684 - lr: 1.0000e-05 - 84ms/epoch - 5ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.05737
16/16 - 0s - loss: 1.3300 - val_loss: 0.0685 - lr: 1.0000e-05 - 91ms/epoch - 6ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.05737
16/16 - 0s - loss: 1.3298 - val_loss: 0.0685 - lr: 1.0000e-05 - 81ms/epoch - 5ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.05737
16/16 - 0s - loss: 1.3295 - val_loss: 0.0685 - lr: 1.0000e-05 - 86ms/epoch - 5ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.05737
16/16 - 0s - loss: 1.3293 - val_loss: 0.0685 - lr: 1.0000e-05 - 86ms/epoch - 5ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.05737
16/16 - 0s - loss: 1.3290 - val_loss: 0.0686 - lr: 1.0000e-05 - 87ms/epoch - 5ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.05737
16/16 - 0s - loss: 1.3288 - val_loss: 0.0686 - lr: 1.0000e-05 - 84ms/epoch - 5ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.05737
16/16 - 0s - loss: 1.3285 - val_loss: 0.0686 - lr: 1.0000e-05 - 95ms/epoch - 6ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.05737
16/16 - 0s - loss: 1.3283 - val_loss: 0.0686 - lr: 1.0000e-05 - 96ms/epoch - 6ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.05737
16/16 - 0s - loss: 1.3280 - val_loss: 0.0686 - lr: 1.0000e-05 - 87ms/epoch - 5ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.05737
16/16 - 0s - loss: 1.3278 - val_loss: 0.0687 - lr: 1.0000e-05 - 93ms/epoch - 6ms/step
Epoch 00051: early stopping
SMA
Prediction vs Close:		55.22% Accuracy
Prediction vs Prediction:	50.37% Accuracy
MSE:	 23.31830336349202 
RMSE:	 4.828902915103183 
MAE:	 3.806885992834059

EMA
Prediction vs Close:		55.97% Accuracy
Prediction vs Prediction:	47.76% Accuracy
MSE:	 31.4391560756623 
RMSE:	 5.607063052584865 
MAE:	 4.398444723456604
WMA
WMA([input_arrays], [timeperiod=30])

Weighted Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
49

Working on WMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.28 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4264.089, Time=0.02 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3793.930, Time=0.02 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.13 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3564.923, Time=0.06 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3427.258, Time=0.08 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.19 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.31 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3429.258, Time=0.17 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 2.256 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1709.629
Date:                Sun, 12 Dec 2021   AIC                           3427.258
Time:                        13:14:36   BIC                           3446.021
Sample:                             0   HQIC                          3434.464
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1981      0.003   -389.386      0.000      -1.204      -1.192
ar.L2         -0.8974      0.006   -139.699      0.000      -0.910      -0.885
ar.L3         -0.3983      0.006    -68.737      0.000      -0.410      -0.387
sigma2         4.0860      0.019    215.311      0.000       4.049       4.123
===================================================================================
Ljung-Box (L1) (Q):                  14.57   Jarque-Bera (JB):           2460901.70
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       273.75
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 
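The stepwise search above is pmdarima's auto_arima minimising the Akaike information criterion, AIC = 2k - 2 ln L, where k counts the estimated parameters. The winning (3,3,0) model's AIC can be cross-checked from the SARIMAX summary's log-likelihood (four parameters: ar.L1, ar.L2, ar.L3, sigma2). A small illustration using the AIC values printed by the search:

```python
def aic(log_likelihood, n_params):
    """Akaike information criterion: 2k - 2*ln(L)."""
    return 2 * n_params - 2 * log_likelihood

# AIC values reported by the stepwise search (orders with AIC=inf omitted)
candidates = {
    (0, 3, 0): 4264.089,
    (1, 3, 0): 3793.930,
    (2, 3, 0): 3564.923,
    (3, 3, 0): 3427.258,
}
best_order = min(candidates, key=candidates.get)
```

The minimum over the candidates recovers the (3,3,0) order auto_arima selects, and `aic(-1709.629, 4)` reproduces the 3427.258 figure in the summary table.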

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.04825, saving model to LSTM4.h5
17/17 - 3s - loss: 1.4220 - val_loss: 0.0483 - lr: 0.0010 - 3s/epoch - 199ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.04825
17/17 - 0s - loss: 1.4044 - val_loss: 0.0489 - lr: 0.0010 - 93ms/epoch - 5ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.04825
17/17 - 0s - loss: 1.3853 - val_loss: 0.0496 - lr: 0.0010 - 95ms/epoch - 6ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.04825
17/17 - 0s - loss: 1.3646 - val_loss: 0.0504 - lr: 0.0010 - 87ms/epoch - 5ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.04825
17/17 - 0s - loss: 1.3430 - val_loss: 0.0512 - lr: 0.0010 - 93ms/epoch - 5ms/step
Epoch 6/500

Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00006: val_loss did not improve from 0.04825
17/17 - 0s - loss: 1.3211 - val_loss: 0.0521 - lr: 0.0010 - 90ms/epoch - 5ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.04825
17/17 - 0s - loss: 1.3068 - val_loss: 0.0522 - lr: 1.0000e-04 - 95ms/epoch - 6ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.04825
17/17 - 0s - loss: 1.3047 - val_loss: 0.0523 - lr: 1.0000e-04 - 94ms/epoch - 6ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.04825
17/17 - 0s - loss: 1.3026 - val_loss: 0.0524 - lr: 1.0000e-04 - 92ms/epoch - 5ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.04825
17/17 - 0s - loss: 1.3005 - val_loss: 0.0525 - lr: 1.0000e-04 - 88ms/epoch - 5ms/step
Epoch 11/500

Epoch 00011: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00011: val_loss did not improve from 0.04825
17/17 - 0s - loss: 1.2985 - val_loss: 0.0526 - lr: 1.0000e-04 - 90ms/epoch - 5ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.04825
17/17 - 0s - loss: 1.2972 - val_loss: 0.0526 - lr: 1.0000e-05 - 92ms/epoch - 5ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.04825
17/17 - 0s - loss: 1.2970 - val_loss: 0.0526 - lr: 1.0000e-05 - 92ms/epoch - 5ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.04825
17/17 - 0s - loss: 1.2968 - val_loss: 0.0526 - lr: 1.0000e-05 - 84ms/epoch - 5ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.04825
17/17 - 0s - loss: 1.2966 - val_loss: 0.0526 - lr: 1.0000e-05 - 92ms/epoch - 5ms/step
Epoch 16/500

Epoch 00016: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00016: val_loss did not improve from 0.04825
17/17 - 0s - loss: 1.2964 - val_loss: 0.0526 - lr: 1.0000e-05 - 89ms/epoch - 5ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.04825
17/17 - 0s - loss: 1.2962 - val_loss: 0.0526 - lr: 1.0000e-05 - 88ms/epoch - 5ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.04825
17/17 - 0s - loss: 1.2960 - val_loss: 0.0526 - lr: 1.0000e-05 - 89ms/epoch - 5ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.04825
17/17 - 0s - loss: 1.2958 - val_loss: 0.0526 - lr: 1.0000e-05 - 93ms/epoch - 5ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.04825
17/17 - 0s - loss: 1.2956 - val_loss: 0.0527 - lr: 1.0000e-05 - 91ms/epoch - 5ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.04825
17/17 - 0s - loss: 1.2954 - val_loss: 0.0527 - lr: 1.0000e-05 - 92ms/epoch - 5ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.04825
17/17 - 0s - loss: 1.2952 - val_loss: 0.0527 - lr: 1.0000e-05 - 90ms/epoch - 5ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.04825
17/17 - 0s - loss: 1.2950 - val_loss: 0.0527 - lr: 1.0000e-05 - 90ms/epoch - 5ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.04825
17/17 - 0s - loss: 1.2948 - val_loss: 0.0527 - lr: 1.0000e-05 - 87ms/epoch - 5ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.04825
17/17 - 0s - loss: 1.2946 - val_loss: 0.0527 - lr: 1.0000e-05 - 89ms/epoch - 5ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.04825
17/17 - 0s - loss: 1.2944 - val_loss: 0.0527 - lr: 1.0000e-05 - 89ms/epoch - 5ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.04825
17/17 - 0s - loss: 1.2942 - val_loss: 0.0527 - lr: 1.0000e-05 - 98ms/epoch - 6ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.04825
17/17 - 0s - loss: 1.2940 - val_loss: 0.0527 - lr: 1.0000e-05 - 94ms/epoch - 6ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.04825
17/17 - 0s - loss: 1.2938 - val_loss: 0.0528 - lr: 1.0000e-05 - 96ms/epoch - 6ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.04825
17/17 - 0s - loss: 1.2936 - val_loss: 0.0528 - lr: 1.0000e-05 - 92ms/epoch - 5ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.04825
17/17 - 0s - loss: 1.2934 - val_loss: 0.0528 - lr: 1.0000e-05 - 89ms/epoch - 5ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.04825
17/17 - 0s - loss: 1.2932 - val_loss: 0.0528 - lr: 1.0000e-05 - 89ms/epoch - 5ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.04825
17/17 - 0s - loss: 1.2931 - val_loss: 0.0528 - lr: 1.0000e-05 - 98ms/epoch - 6ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.04825
17/17 - 0s - loss: 1.2929 - val_loss: 0.0528 - lr: 1.0000e-05 - 97ms/epoch - 6ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.04825
17/17 - 0s - loss: 1.2927 - val_loss: 0.0528 - lr: 1.0000e-05 - 95ms/epoch - 6ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.04825
17/17 - 0s - loss: 1.2925 - val_loss: 0.0528 - lr: 1.0000e-05 - 88ms/epoch - 5ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.04825
17/17 - 0s - loss: 1.2923 - val_loss: 0.0528 - lr: 1.0000e-05 - 92ms/epoch - 5ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.04825
17/17 - 0s - loss: 1.2921 - val_loss: 0.0528 - lr: 1.0000e-05 - 100ms/epoch - 6ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.04825
17/17 - 0s - loss: 1.2919 - val_loss: 0.0529 - lr: 1.0000e-05 - 94ms/epoch - 6ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.04825
17/17 - 0s - loss: 1.2917 - val_loss: 0.0529 - lr: 1.0000e-05 - 96ms/epoch - 6ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.04825
17/17 - 0s - loss: 1.2915 - val_loss: 0.0529 - lr: 1.0000e-05 - 93ms/epoch - 5ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.04825
17/17 - 0s - loss: 1.2913 - val_loss: 0.0529 - lr: 1.0000e-05 - 97ms/epoch - 6ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.04825
17/17 - 0s - loss: 1.2911 - val_loss: 0.0529 - lr: 1.0000e-05 - 93ms/epoch - 5ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.04825
17/17 - 0s - loss: 1.2909 - val_loss: 0.0529 - lr: 1.0000e-05 - 105ms/epoch - 6ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.04825
17/17 - 0s - loss: 1.2907 - val_loss: 0.0529 - lr: 1.0000e-05 - 94ms/epoch - 6ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.04825
17/17 - 0s - loss: 1.2905 - val_loss: 0.0529 - lr: 1.0000e-05 - 93ms/epoch - 5ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.04825
17/17 - 0s - loss: 1.2903 - val_loss: 0.0529 - lr: 1.0000e-05 - 97ms/epoch - 6ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.04825
17/17 - 0s - loss: 1.2901 - val_loss: 0.0530 - lr: 1.0000e-05 - 92ms/epoch - 5ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.04825
17/17 - 0s - loss: 1.2899 - val_loss: 0.0530 - lr: 1.0000e-05 - 97ms/epoch - 6ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.04825
17/17 - 0s - loss: 1.2898 - val_loss: 0.0530 - lr: 1.0000e-05 - 94ms/epoch - 6ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.04825
17/17 - 0s - loss: 1.2896 - val_loss: 0.0530 - lr: 1.0000e-05 - 98ms/epoch - 6ms/step
Epoch 00051: early stopping
SMA
Prediction vs Close:		55.22% Accuracy
Prediction vs Prediction:	50.37% Accuracy
MSE:	 23.31830336349202 
RMSE:	 4.828902915103183 
MAPE:	 3.806885992834059

EMA
Prediction vs Close:		55.97% Accuracy
Prediction vs Prediction:	47.76% Accuracy
MSE:	 31.4391560756623 
RMSE:	 5.607063052584865 
MAPE:	 4.398444723456604

WMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	46.64% Accuracy
MSE:	 48.85272948439791 
RMSE:	 6.989472761546318 
MAPE:	 5.616901258925532

DEMA
DEMA([input_arrays], [timeperiod=30])

Double Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
89
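DEMA is defined as twice the EMA minus the EMA of the EMA, which cancels much of a plain EMA's lag. The sketch below uses a simple first-value seed for the EMA recursion; TA-Lib seeds its EMA with an SMA of the first window, so its numbers will differ slightly at the start of the series.

```python
def ema(prices, timeperiod):
    """Exponential moving average, seeded with the first price."""
    k = 2 / (timeperiod + 1)
    out = [prices[0]]
    for p in prices[1:]:
        out.append(k * p + (1 - k) * out[-1])
    return out

def dema(prices, timeperiod=30):
    """Double EMA: 2*EMA(p) - EMA(EMA(p)), reducing the EMA's lag."""
    e1 = ema(prices, timeperiod)
    e2 = ema(e1, timeperiod)
    return [2 * a - b for a, b in zip(e1, e2)]
```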

Working on DEMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.28 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4436.126, Time=0.02 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3965.317, Time=0.02 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.22 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3736.589, Time=0.04 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3598.951, Time=0.05 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=0.53 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.52 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3600.951, Time=0.12 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 1.791 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1795.475
Date:                Sun, 12 Dec 2021   AIC                           3598.951
Time:                        13:15:44   BIC                           3617.714
Sample:                             0   HQIC                          3606.157
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1983      0.003   -389.581      0.000      -1.204      -1.192
ar.L2         -0.8973      0.006   -139.732      0.000      -0.910      -0.885
ar.L3         -0.3983      0.006    -68.649      0.000      -0.410      -0.387
sigma2         5.0573      0.023    215.292      0.000       5.011       5.103
===================================================================================
Ljung-Box (L1) (Q):                  14.41   Jarque-Bera (JB):           2460553.80
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.89
Prob(H) (two-sided):                  0.00   Kurtosis:                       273.74
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.04932, saving model to LSTM4.h5
10/10 - 4s - loss: 1.4315 - val_loss: 0.0493 - lr: 0.0010 - 4s/epoch - 380ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.04932
10/10 - 0s - loss: 1.4228 - val_loss: 0.0499 - lr: 0.0010 - 69ms/epoch - 7ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.04932
10/10 - 0s - loss: 1.4140 - val_loss: 0.0505 - lr: 0.0010 - 62ms/epoch - 6ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.04932
10/10 - 0s - loss: 1.4050 - val_loss: 0.0511 - lr: 0.0010 - 66ms/epoch - 7ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.04932
10/10 - 0s - loss: 1.3959 - val_loss: 0.0517 - lr: 0.0010 - 59ms/epoch - 6ms/step
Epoch 6/500

Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00006: val_loss did not improve from 0.04932
10/10 - 0s - loss: 1.3866 - val_loss: 0.0523 - lr: 0.0010 - 66ms/epoch - 7ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.04932
10/10 - 0s - loss: 1.3800 - val_loss: 0.0524 - lr: 1.0000e-04 - 65ms/epoch - 7ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.04932
10/10 - 0s - loss: 1.3790 - val_loss: 0.0524 - lr: 1.0000e-04 - 60ms/epoch - 6ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.04932
10/10 - 0s - loss: 1.3781 - val_loss: 0.0525 - lr: 1.0000e-04 - 63ms/epoch - 6ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.04932
10/10 - 0s - loss: 1.3771 - val_loss: 0.0525 - lr: 1.0000e-04 - 70ms/epoch - 7ms/step
Epoch 11/500

Epoch 00011: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00011: val_loss did not improve from 0.04932
10/10 - 0s - loss: 1.3762 - val_loss: 0.0526 - lr: 1.0000e-04 - 64ms/epoch - 6ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.04932
10/10 - 0s - loss: 1.3755 - val_loss: 0.0526 - lr: 1.0000e-05 - 65ms/epoch - 6ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.04932
10/10 - 0s - loss: 1.3754 - val_loss: 0.0526 - lr: 1.0000e-05 - 58ms/epoch - 6ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.04932
10/10 - 0s - loss: 1.3753 - val_loss: 0.0526 - lr: 1.0000e-05 - 65ms/epoch - 7ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.04932
10/10 - 0s - loss: 1.3752 - val_loss: 0.0526 - lr: 1.0000e-05 - 65ms/epoch - 7ms/step
Epoch 16/500

Epoch 00016: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00016: val_loss did not improve from 0.04932
10/10 - 0s - loss: 1.3751 - val_loss: 0.0526 - lr: 1.0000e-05 - 69ms/epoch - 7ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.04932
10/10 - 0s - loss: 1.3750 - val_loss: 0.0526 - lr: 1.0000e-05 - 59ms/epoch - 6ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.04932
10/10 - 0s - loss: 1.3749 - val_loss: 0.0526 - lr: 1.0000e-05 - 66ms/epoch - 7ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.04932
10/10 - 0s - loss: 1.3749 - val_loss: 0.0526 - lr: 1.0000e-05 - 63ms/epoch - 6ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.04932
10/10 - 0s - loss: 1.3748 - val_loss: 0.0526 - lr: 1.0000e-05 - 58ms/epoch - 6ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.04932
10/10 - 0s - loss: 1.3747 - val_loss: 0.0526 - lr: 1.0000e-05 - 60ms/epoch - 6ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.04932
10/10 - 0s - loss: 1.3746 - val_loss: 0.0526 - lr: 1.0000e-05 - 63ms/epoch - 6ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.04932
10/10 - 0s - loss: 1.3745 - val_loss: 0.0526 - lr: 1.0000e-05 - 65ms/epoch - 7ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.04932
10/10 - 0s - loss: 1.3744 - val_loss: 0.0526 - lr: 1.0000e-05 - 67ms/epoch - 7ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.04932
10/10 - 0s - loss: 1.3743 - val_loss: 0.0527 - lr: 1.0000e-05 - 68ms/epoch - 7ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.04932
10/10 - 0s - loss: 1.3742 - val_loss: 0.0527 - lr: 1.0000e-05 - 66ms/epoch - 7ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.04932
10/10 - 0s - loss: 1.3741 - val_loss: 0.0527 - lr: 1.0000e-05 - 66ms/epoch - 7ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.04932
10/10 - 0s - loss: 1.3740 - val_loss: 0.0527 - lr: 1.0000e-05 - 59ms/epoch - 6ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.04932
10/10 - 0s - loss: 1.3739 - val_loss: 0.0527 - lr: 1.0000e-05 - 68ms/epoch - 7ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.04932
10/10 - 0s - loss: 1.3738 - val_loss: 0.0527 - lr: 1.0000e-05 - 65ms/epoch - 7ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.04932
10/10 - 0s - loss: 1.3737 - val_loss: 0.0527 - lr: 1.0000e-05 - 72ms/epoch - 7ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.04932
10/10 - 0s - loss: 1.3736 - val_loss: 0.0527 - lr: 1.0000e-05 - 61ms/epoch - 6ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.04932
10/10 - 0s - loss: 1.3735 - val_loss: 0.0527 - lr: 1.0000e-05 - 59ms/epoch - 6ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.04932
10/10 - 0s - loss: 1.3734 - val_loss: 0.0527 - lr: 1.0000e-05 - 59ms/epoch - 6ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.04932
10/10 - 0s - loss: 1.3734 - val_loss: 0.0527 - lr: 1.0000e-05 - 60ms/epoch - 6ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.04932
10/10 - 0s - loss: 1.3733 - val_loss: 0.0527 - lr: 1.0000e-05 - 62ms/epoch - 6ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.04932
10/10 - 0s - loss: 1.3732 - val_loss: 0.0527 - lr: 1.0000e-05 - 71ms/epoch - 7ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.04932
10/10 - 0s - loss: 1.3731 - val_loss: 0.0527 - lr: 1.0000e-05 - 64ms/epoch - 6ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.04932
10/10 - 0s - loss: 1.3730 - val_loss: 0.0527 - lr: 1.0000e-05 - 67ms/epoch - 7ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.04932
10/10 - 0s - loss: 1.3729 - val_loss: 0.0527 - lr: 1.0000e-05 - 62ms/epoch - 6ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.04932
10/10 - 0s - loss: 1.3728 - val_loss: 0.0527 - lr: 1.0000e-05 - 61ms/epoch - 6ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.04932
10/10 - 0s - loss: 1.3727 - val_loss: 0.0527 - lr: 1.0000e-05 - 68ms/epoch - 7ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.04932
10/10 - 0s - loss: 1.3726 - val_loss: 0.0527 - lr: 1.0000e-05 - 63ms/epoch - 6ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.04932
10/10 - 0s - loss: 1.3725 - val_loss: 0.0527 - lr: 1.0000e-05 - 58ms/epoch - 6ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.04932
10/10 - 0s - loss: 1.3724 - val_loss: 0.0528 - lr: 1.0000e-05 - 57ms/epoch - 6ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.04932
10/10 - 0s - loss: 1.3723 - val_loss: 0.0528 - lr: 1.0000e-05 - 60ms/epoch - 6ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.04932
10/10 - 0s - loss: 1.3722 - val_loss: 0.0528 - lr: 1.0000e-05 - 64ms/epoch - 6ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.04932
10/10 - 0s - loss: 1.3721 - val_loss: 0.0528 - lr: 1.0000e-05 - 60ms/epoch - 6ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.04932
10/10 - 0s - loss: 1.3720 - val_loss: 0.0528 - lr: 1.0000e-05 - 60ms/epoch - 6ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.04932
10/10 - 0s - loss: 1.3719 - val_loss: 0.0528 - lr: 1.0000e-05 - 58ms/epoch - 6ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.04932
10/10 - 0s - loss: 1.3718 - val_loss: 0.0528 - lr: 1.0000e-05 - 59ms/epoch - 6ms/step
Epoch 00051: early stopping
SMA
Prediction vs Close:		55.22% Accuracy
Prediction vs Prediction:	50.37% Accuracy
MSE:	 23.31830336349202 
RMSE:	 4.828902915103183 
MAPE:	 3.806885992834059

EMA
Prediction vs Close:		55.97% Accuracy
Prediction vs Prediction:	47.76% Accuracy
MSE:	 31.4391560756623 
RMSE:	 5.607063052584865 
MAPE:	 4.398444723456604

WMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	46.64% Accuracy
MSE:	 48.85272948439791 
RMSE:	 6.989472761546318 
MAPE:	 5.616901258925532

DEMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 143.4471215002686 
RMSE:	 11.976941241413376 
MAPE:	 10.686872819228396

KAMA
KAMA([input_arrays], [timeperiod=30])

Kaufman Adaptive Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
18
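Kaufman's adaptive moving average scales its smoothing constant between a fast (2-period) and slow (30-period) EMA constant according to the efficiency ratio, |net change over the window| divided by the sum of bar-to-bar absolute changes. The seeding and edge handling below are simplified relative to TA-Lib, so treat this as an illustrative sketch rather than a drop-in replacement:

```python
def kama(prices, timeperiod=30, fast=2, slow=30):
    """Kaufman adaptive MA: smoothing adapts to the efficiency ratio."""
    fast_sc = 2 / (fast + 1)
    slow_sc = 2 / (slow + 1)
    out = [prices[timeperiod - 1]]          # simplified seed: first full bar
    for i in range(timeperiod, len(prices)):
        change = abs(prices[i] - prices[i - timeperiod])
        volatility = sum(abs(prices[j] - prices[j - 1])
                         for j in range(i - timeperiod + 1, i + 1))
        er = change / volatility if volatility else 1.0
        sc = (er * (fast_sc - slow_sc) + slow_sc) ** 2
        out.append(out[-1] + sc * (prices[i] - out[-1]))
    return out
```

On a perfectly trending series the efficiency ratio is 1 and KAMA behaves like the fast EMA; on choppy, mean-reverting data it slows toward the 30-period constant.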

Working on KAMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.25 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4190.464, Time=0.02 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3724.371, Time=0.02 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.15 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3494.154, Time=0.04 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3357.435, Time=0.09 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=0.71 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.42 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3359.435, Time=0.12 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 1.825 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1674.717
Date:                Sun, 12 Dec 2021   AIC                           3357.435
Time:                        13:16:44   BIC                           3376.198
Sample:                             0   HQIC                          3364.641
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1955      0.003   -381.246      0.000      -1.202      -1.189
ar.L2         -0.8964      0.007   -135.835      0.000      -0.909      -0.883
ar.L3         -0.3971      0.006    -67.229      0.000      -0.409      -0.385
sigma2         3.7466      0.018    211.623      0.000       3.712       3.781
===================================================================================
Ljung-Box (L1) (Q):                  14.20   Jarque-Bera (JB):           2338363.32
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.01   Skew:                             3.76
Prob(H) (two-sided):                  0.00   Kurtosis:                       266.93
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.04951, saving model to LSTM4.h5
45/45 - 4s - loss: 1.3909 - val_loss: 0.0495 - lr: 0.0010 - 4s/epoch - 84ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.04951
45/45 - 0s - loss: 1.2737 - val_loss: 0.0516 - lr: 0.0010 - 226ms/epoch - 5ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.04951
45/45 - 0s - loss: 1.1363 - val_loss: 0.0550 - lr: 0.0010 - 231ms/epoch - 5ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.04951
45/45 - 0s - loss: 1.0490 - val_loss: 0.0585 - lr: 0.0010 - 209ms/epoch - 5ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.04951
45/45 - 0s - loss: 0.9940 - val_loss: 0.0621 - lr: 0.0010 - 205ms/epoch - 5ms/step
Epoch 6/500

Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00006: val_loss did not improve from 0.04951
45/45 - 0s - loss: 0.9525 - val_loss: 0.0658 - lr: 0.0010 - 222ms/epoch - 5ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.04951
45/45 - 0s - loss: 0.9304 - val_loss: 0.0662 - lr: 1.0000e-04 - 228ms/epoch - 5ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.04951
45/45 - 0s - loss: 0.9271 - val_loss: 0.0666 - lr: 1.0000e-04 - 220ms/epoch - 5ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.04951
45/45 - 0s - loss: 0.9238 - val_loss: 0.0670 - lr: 1.0000e-04 - 220ms/epoch - 5ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.04951
45/45 - 0s - loss: 0.9205 - val_loss: 0.0674 - lr: 1.0000e-04 - 205ms/epoch - 5ms/step
Epoch 11/500

Epoch 00011: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00011: val_loss did not improve from 0.04951
45/45 - 0s - loss: 0.9172 - val_loss: 0.0678 - lr: 1.0000e-04 - 230ms/epoch - 5ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.04951
45/45 - 0s - loss: 0.9152 - val_loss: 0.0678 - lr: 1.0000e-05 - 228ms/epoch - 5ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.04951
45/45 - 0s - loss: 0.9149 - val_loss: 0.0679 - lr: 1.0000e-05 - 219ms/epoch - 5ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.04951
45/45 - 0s - loss: 0.9146 - val_loss: 0.0679 - lr: 1.0000e-05 - 209ms/epoch - 5ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.04951
45/45 - 0s - loss: 0.9142 - val_loss: 0.0680 - lr: 1.0000e-05 - 203ms/epoch - 5ms/step
Epoch 16/500

Epoch 00016: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00016: val_loss did not improve from 0.04951
45/45 - 0s - loss: 0.9139 - val_loss: 0.0680 - lr: 1.0000e-05 - 238ms/epoch - 5ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.04951
45/45 - 0s - loss: 0.9136 - val_loss: 0.0681 - lr: 1.0000e-05 - 213ms/epoch - 5ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.04951
45/45 - 0s - loss: 0.9132 - val_loss: 0.0681 - lr: 1.0000e-05 - 225ms/epoch - 5ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.04951
45/45 - 0s - loss: 0.9129 - val_loss: 0.0682 - lr: 1.0000e-05 - 214ms/epoch - 5ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.04951
45/45 - 0s - loss: 0.9126 - val_loss: 0.0682 - lr: 1.0000e-05 - 230ms/epoch - 5ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.04951
45/45 - 0s - loss: 0.9122 - val_loss: 0.0683 - lr: 1.0000e-05 - 219ms/epoch - 5ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.04951
45/45 - 0s - loss: 0.9119 - val_loss: 0.0683 - lr: 1.0000e-05 - 203ms/epoch - 5ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.04951
45/45 - 0s - loss: 0.9115 - val_loss: 0.0684 - lr: 1.0000e-05 - 224ms/epoch - 5ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.04951
45/45 - 0s - loss: 0.9112 - val_loss: 0.0684 - lr: 1.0000e-05 - 207ms/epoch - 5ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.04951
45/45 - 0s - loss: 0.9108 - val_loss: 0.0685 - lr: 1.0000e-05 - 235ms/epoch - 5ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.04951
45/45 - 0s - loss: 0.9105 - val_loss: 0.0685 - lr: 1.0000e-05 - 220ms/epoch - 5ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.04951
45/45 - 0s - loss: 0.9101 - val_loss: 0.0686 - lr: 1.0000e-05 - 209ms/epoch - 5ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.04951
45/45 - 0s - loss: 0.9098 - val_loss: 0.0686 - lr: 1.0000e-05 - 198ms/epoch - 4ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.04951
45/45 - 0s - loss: 0.9094 - val_loss: 0.0687 - lr: 1.0000e-05 - 210ms/epoch - 5ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.04951
45/45 - 0s - loss: 0.9091 - val_loss: 0.0687 - lr: 1.0000e-05 - 223ms/epoch - 5ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.04951
45/45 - 0s - loss: 0.9087 - val_loss: 0.0688 - lr: 1.0000e-05 - 218ms/epoch - 5ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.04951
45/45 - 0s - loss: 0.9084 - val_loss: 0.0689 - lr: 1.0000e-05 - 204ms/epoch - 5ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.04951
45/45 - 0s - loss: 0.9080 - val_loss: 0.0689 - lr: 1.0000e-05 - 207ms/epoch - 5ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.04951
45/45 - 0s - loss: 0.9077 - val_loss: 0.0690 - lr: 1.0000e-05 - 221ms/epoch - 5ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.04951
45/45 - 0s - loss: 0.9073 - val_loss: 0.0690 - lr: 1.0000e-05 - 216ms/epoch - 5ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.04951
45/45 - 0s - loss: 0.9070 - val_loss: 0.0691 - lr: 1.0000e-05 - 206ms/epoch - 5ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.04951
45/45 - 0s - loss: 0.9066 - val_loss: 0.0692 - lr: 1.0000e-05 - 204ms/epoch - 5ms/step

[... epochs 38-51: val_loss did not improve from 0.04951; training loss drifted from 0.9063 to 0.9017 while val_loss rose from 0.0692 to 0.0700, lr 1.0000e-05 throughout ...]

Epoch 00051: early stopping
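
The early stop and the staged learning-rate cuts in the log above come from Keras callbacks. Below is a minimal, stand-alone replay of that schedule; the patience values are inferred from the printed epochs (no improvement for ~5 epochs triggers a cut, ~50 triggers the stop), not taken from the notebook's code, so treat them as assumptions.

```python
def train_schedule(val_losses, lr=1e-3, factor=0.1, plateau_patience=4,
                   stop_patience=50, min_lr=1e-5):
    """Replay the callback logic implied by the log: cut the learning rate
    after a run of stale epochs, clip it at min_lr, and stop after
    stop_patience epochs without a new best val_loss."""
    best = float("inf")
    since_best = 0
    history = []
    for epoch, v in enumerate(val_losses, start=1):
        if v < best:
            best, since_best = v, 0
        else:
            since_best += 1
        # Reduce every (plateau_patience + 1) stale epochs, mimicking the
        # reductions printed at epochs 6, 11 and 16 above
        if since_best and since_best % (plateau_patience + 1) == 0:
            lr = max(lr * factor, min_lr)
        history.append((epoch, lr))
        if since_best >= stop_patience:
            break  # early stopping
    return history
```

Feeding it one early best followed by a flat val_loss reproduces the pattern seen above: cuts at epochs 6 and 11, a clip at 16, and termination at epoch 51.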
SMA
Prediction vs Close:		55.22% Accuracy
Prediction vs Prediction:	50.37% Accuracy
MSE:	 23.31830336349202 
RMSE:	 4.828902915103183 
MAPE:	 3.806885992834059

EMA
Prediction vs Close:		55.97% Accuracy
Prediction vs Prediction:	47.76% Accuracy
MSE:	 31.4391560756623 
RMSE:	 5.607063052584865 
MAPE:	 4.398444723456604

WMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	46.64% Accuracy
MSE:	 48.85272948439791 
RMSE:	 6.989472761546318 
MAPE:	 5.616901258925532

DEMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 143.4471215002686 
RMSE:	 11.976941241413376 
MAPE:	 10.686872819228396

KAMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	49.63% Accuracy
MSE:	 23.251670970583447 
RMSE:	 4.821998648961181 
MAPE:	 3.833042253232743
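
The four numbers reported per indicator can be reproduced with a few lines of NumPy. The "Prediction vs Close" accuracy is interpreted here as sign agreement of day-over-day changes; that interpretation is an assumption, since the scoring code itself is not shown in this excerpt.

```python
import numpy as np

def score(pred, close):
    # Directional accuracy: fraction of days where the predicted change
    # has the same sign as the actual change in the close
    hits = np.sign(np.diff(pred)) == np.sign(np.diff(close))
    acc = 100.0 * hits.mean()
    err = pred - close
    mse = float(np.mean(err ** 2))
    rmse = float(np.sqrt(mse))
    mape = float(np.mean(np.abs(err / close)) * 100.0)
    return acc, mse, rmse, mape
```

Calling `score(hybrid_pred, close_prices)` on each indicator's hybrid forecast would yield one row of the summary above.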

MIDPOINT
MIDPOINT([input_arrays], [timeperiod=14])

MidPoint over period (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 14
Outputs:
    real
14
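
TA-Lib's MIDPOINT is simply the mean of the rolling high and low of the input. A plain-NumPy equivalent, matching the 14-bar default above and TA-Lib's NaN warm-up period, might look like:

```python
import numpy as np

def midpoint(x, timeperiod=14):
    # (highest + lowest) / 2 over a trailing window; the first
    # timeperiod - 1 values are undefined, mirroring TA-Lib's lookback
    out = np.full(len(x), np.nan)
    for i in range(timeperiod - 1, len(x)):
        w = x[i - timeperiod + 1 : i + 1]
        out[i] = (w.max() + w.min()) / 2.0
    return out
```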

Working on MIDPOINT predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.26 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4212.289, Time=0.02 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3747.746, Time=0.03 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.14 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3523.401, Time=0.04 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3387.759, Time=0.05 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=0.69 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.50 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3389.758, Time=0.21 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 1.941 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1689.879
Date:                Sun, 12 Dec 2021   AIC                           3387.759
Time:                        13:18:02   BIC                           3406.522
Sample:                             0   HQIC                          3394.964
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1878      0.003   -345.315      0.000      -1.195      -1.181
ar.L2         -0.8876      0.007   -121.809      0.000      -0.902      -0.873
ar.L3         -0.3957      0.007    -60.127      0.000      -0.409      -0.383
sigma2         3.8904      0.020    193.404      0.000       3.851       3.930
===================================================================================
Ljung-Box (L1) (Q):                  13.21   Jarque-Bera (JB):           1659080.01
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.08   Skew:                             3.28
Prob(H) (two-sided):                  0.00   Kurtosis:                       225.31
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 
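
The stepwise search above is essentially an AIC race between candidate orders on the d=3 differenced series. A stripped-down illustration of that selection logic on synthetic data, using least-squares AR fits and a Gaussian AIC up to an additive constant; this is a sketch of the idea, not pmdarima's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in series, integrated of order 3 like the differencing chosen above
y = np.cumsum(np.cumsum(np.cumsum(rng.normal(size=900))))
d3 = np.diff(y, n=3)  # third difference, matching d=3

def ar_aic(x, p):
    # Least-squares AR(p) without intercept; AIC = n*log(RSS/n) + 2*(p+1)
    n = len(x) - p
    target = x[p:]
    if p == 0:
        resid = target
    else:
        X = np.column_stack([x[p - k - 1 : p - k - 1 + n] for k in range(p)])
        coef, *_ = np.linalg.lstsq(X, target, rcond=None)
        resid = target - X @ coef
    rss = float(resid @ resid)
    return n * np.log(rss / n) + 2 * (p + 1)

# Compare AR(0)..AR(3) on the differenced series and keep the lowest AIC,
# the same criterion the stepwise search minimizes
aics = {p: ar_aic(d3, p) for p in range(4)}
best_p = min(aics, key=aics.get)
```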

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.06096, saving model to LSTM4.h5
58/58 - 4s - loss: 1.4939 - val_loss: 0.0610 - lr: 0.0010 - 4s/epoch - 62ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.06096
58/58 - 0s - loss: 1.2750 - val_loss: 0.0712 - lr: 0.0010 - 298ms/epoch - 5ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.06096
58/58 - 0s - loss: 1.0964 - val_loss: 0.0798 - lr: 0.0010 - 271ms/epoch - 5ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.06096
58/58 - 0s - loss: 0.9939 - val_loss: 0.0866 - lr: 0.0010 - 272ms/epoch - 5ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.06096
58/58 - 0s - loss: 0.9161 - val_loss: 0.0916 - lr: 0.0010 - 274ms/epoch - 5ms/step
Epoch 6/500

Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00006: val_loss did not improve from 0.06096
58/58 - 0s - loss: 0.8398 - val_loss: 0.0966 - lr: 0.0010 - 269ms/epoch - 5ms/step

[... epochs 7-51: val_loss did not improve from 0.06096; the learning rate was cut to 1.0000e-05 at epoch 11 and clipped there at epoch 16; training loss drifted from 0.7994 to 0.7659 while val_loss rose from 0.0971 to 0.1026 ...]

Epoch 00051: early stopping
SMA / EMA / WMA / DEMA / KAMA: unchanged from the cumulative summary above.

MIDPOINT
Prediction vs Close:		51.12% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 16.39872837560197 
RMSE:	 4.0495343405880595 
MAPE:	 3.299619771312048

T3
T3([input_arrays], [timeperiod=5], [vfactor=0.7])

Triple Exponential Moving Average (T3) (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 5
    vfactor: 0.7
Outputs:
    real
19
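
T3 is Tillson's smoother: six cascaded EMAs blended with coefficients derived from vfactor, which trades smoothness against lag. A rough NumPy version follows; the EMAs here are seeded with the first value, so the warm-up differs slightly from TA-Lib's output.

```python
import numpy as np

def ema(x, n):
    # Recursive EMA with smoothing 2/(n+1), seeded with the first value
    a = 2.0 / (n + 1)
    out = np.empty(len(x))
    out[0] = x[0]
    for i in range(1, len(x)):
        out[i] = a * x[i] + (1 - a) * out[i - 1]
    return out

def t3(x, n=5, v=0.7):
    # Six cascaded EMAs combined with Tillson's coefficients; the four
    # coefficients sum to 1, so a constant input maps to itself
    e = x
    es = []
    for _ in range(6):
        e = ema(e, n)
        es.append(e)
    c1 = -v**3
    c2 = 3*v**2 + 3*v**3
    c3 = -6*v**2 - 3*v - 3*v**3
    c4 = 1 + 3*v + v**3 + 3*v**2
    return c1*es[5] + c2*es[4] + c3*es[3] + c4*es[2]
```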

Working on T3 predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.26 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4414.515, Time=0.02 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3944.062, Time=0.02 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.20 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3715.173, Time=0.04 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3577.471, Time=0.05 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=0.82 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.33 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3579.471, Time=0.11 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 1.849 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1784.736
Date:                Sun, 12 Dec 2021   AIC                           3577.471
Time:                        13:19:29   BIC                           3596.235
Sample:                             0   HQIC                          3584.677
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1982      0.003   -389.844      0.000      -1.204      -1.192
ar.L2         -0.8974      0.006   -139.861      0.000      -0.910      -0.885
ar.L3         -0.3983      0.006    -68.862      0.000      -0.410      -0.387
sigma2         4.9242      0.023    215.469      0.000       4.879       4.969
===================================================================================
Ljung-Box (L1) (Q):                  14.55   Jarque-Bera (JB):           2468024.38
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       274.15
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.04986, saving model to LSTM4.h5
43/43 - 4s - loss: 1.3764 - val_loss: 0.0499 - lr: 0.0010 - 4s/epoch - 92ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.04986 to 0.04710, saving model to LSTM4.h5
43/43 - 0s - loss: 1.2711 - val_loss: 0.0471 - lr: 0.0010 - 216ms/epoch - 5ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.04710
43/43 - 0s - loss: 1.1320 - val_loss: 0.0473 - lr: 0.0010 - 209ms/epoch - 5ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.04710
43/43 - 0s - loss: 0.9343 - val_loss: 0.0507 - lr: 0.0010 - 210ms/epoch - 5ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.04710
43/43 - 0s - loss: 0.8210 - val_loss: 0.0539 - lr: 0.0010 - 208ms/epoch - 5ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.04710
43/43 - 0s - loss: 0.7722 - val_loss: 0.0568 - lr: 0.0010 - 216ms/epoch - 5ms/step

[... epochs 7-52: val_loss did not improve from 0.04710; the learning rate was cut to 1.0000e-04 at epoch 7 and 1.0000e-05 at epoch 12, then clipped at epoch 17; training loss drifted from 0.7424 to 0.7092 while val_loss rose from 0.0597 to 0.0639 ...]

Epoch 00052: early stopping
SMA / EMA / WMA / DEMA / KAMA / MIDPOINT: unchanged from the cumulative summary above.

T3
Prediction vs Close:		53.36% Accuracy
Prediction vs Prediction:	48.13% Accuracy
MSE:	 85.43536380908036 
RMSE:	 9.24312521872772 
MAPE:	 7.5496901284439915

TEMA
TEMA([input_arrays], [timeperiod=30])

Triple Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
9
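
TEMA triple-smooths the input and then recombines the stages to cancel most of the EMA lag: TEMA = 3*EMA1 - 3*EMA2 + EMA3. A minimal NumPy version, again seeded with the first value rather than TA-Lib's SMA warm-up:

```python
import numpy as np

def ema(x, n):
    # Recursive EMA with smoothing 2/(n+1), seeded with the first value
    a = 2.0 / (n + 1)
    out = np.empty(len(x))
    out[0] = x[0]
    for i in range(1, len(x)):
        out[i] = a * x[i] + (1 - a) * out[i - 1]
    return out

def tema(x, n=30):
    # The 3/-3/+1 weights sum to 1, so the level is preserved while
    # most of the single-EMA lag cancels out
    e1 = ema(x, n)
    e2 = ema(e1, n)
    e3 = ema(e2, n)
    return 3 * e1 - 3 * e2 + e3
```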

Working on TEMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=inf, Time=0.31 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=4352.703, Time=0.02 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=3889.412, Time=0.02 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=inf, Time=0.15 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=3689.930, Time=0.04 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=3574.245, Time=0.07 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=inf, Time=1.05 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=inf, Time=0.47 sec
 ARIMA(3,3,0)(0,0,0)[0] intercept   : AIC=3576.245, Time=0.17 sec

Best model:  ARIMA(3,3,0)(0,0,0)[0]          
Total fit time: 2.305 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1783.123
Date:                Sun, 12 Dec 2021   AIC                           3574.245
Time:                        13:20:45   BIC                           3593.008
Sample:                             0   HQIC                          3581.451
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1480      0.004   -302.430      0.000      -1.155      -1.141
ar.L2         -0.8300      0.008    -99.682      0.000      -0.846      -0.814
ar.L3         -0.3687      0.007    -50.527      0.000      -0.383      -0.354
sigma2         4.9055      0.028    175.970      0.000       4.851       4.960
===================================================================================
Ljung-Box (L1) (Q):                  11.61   Jarque-Bera (JB):           1261976.58
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.16   Skew:                             2.52
Prob(H) (two-sided):                  0.00   Kurtosis:                       196.90
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.05138, saving model to LSTM4.h5
90/90 - 4s - loss: 1.3734 - val_loss: 0.0514 - lr: 0.0010 - 4s/epoch - 44ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.05138
90/90 - 0s - loss: 1.2620 - val_loss: 0.0593 - lr: 0.0010 - 414ms/epoch - 5ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.05138
90/90 - 0s - loss: 1.1366 - val_loss: 0.0715 - lr: 0.0010 - 415ms/epoch - 5ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.05138
90/90 - 0s - loss: 1.0216 - val_loss: 0.0842 - lr: 0.0010 - 408ms/epoch - 5ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.05138
90/90 - 0s - loss: 0.9400 - val_loss: 0.0973 - lr: 0.0010 - 424ms/epoch - 5ms/step
Epoch 6/500

Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00006: val_loss did not improve from 0.05138
90/90 - 0s - loss: 0.8801 - val_loss: 0.1105 - lr: 0.0010 - 420ms/epoch - 5ms/step

[... epochs 7-39: val_loss did not improve from 0.05138; the learning rate was cut to 1.0000e-05 at epoch 11 and clipped there at epoch 16; training loss drifted from 0.8499 to 0.8159 while val_loss rose from 0.1118 to 0.1229 ...]

Epoch 40/500

Epoch 00040: val_loss did not improve from 0.05138
90/90 - 0s - loss: 0.8154 - val_loss: 0.1231 - lr: 1.0000e-05 - 410ms/epoch - 5ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.05138
90/90 - 0s - loss: 0.8149 - val_loss: 0.1234 - lr: 1.0000e-05 - 381ms/epoch - 4ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.05138
90/90 - 0s - loss: 0.8144 - val_loss: 0.1236 - lr: 1.0000e-05 - 392ms/epoch - 4ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.05138
90/90 - 0s - loss: 0.8139 - val_loss: 0.1238 - lr: 1.0000e-05 - 393ms/epoch - 4ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.05138
90/90 - 0s - loss: 0.8134 - val_loss: 0.1240 - lr: 1.0000e-05 - 413ms/epoch - 5ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.05138
90/90 - 0s - loss: 0.8129 - val_loss: 0.1242 - lr: 1.0000e-05 - 417ms/epoch - 5ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.05138
90/90 - 0s - loss: 0.8124 - val_loss: 0.1245 - lr: 1.0000e-05 - 401ms/epoch - 4ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.05138
90/90 - 0s - loss: 0.8119 - val_loss: 0.1247 - lr: 1.0000e-05 - 413ms/epoch - 5ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.05138
90/90 - 0s - loss: 0.8114 - val_loss: 0.1249 - lr: 1.0000e-05 - 385ms/epoch - 4ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.05138
90/90 - 0s - loss: 0.8109 - val_loss: 0.1251 - lr: 1.0000e-05 - 394ms/epoch - 4ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.05138
90/90 - 0s - loss: 0.8104 - val_loss: 0.1253 - lr: 1.0000e-05 - 393ms/epoch - 4ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.05138
90/90 - 0s - loss: 0.8099 - val_loss: 0.1256 - lr: 1.0000e-05 - 378ms/epoch - 4ms/step
Epoch 00051: early stopping
SMA
Prediction vs Close:		55.22% Accuracy
Prediction vs Prediction:	50.37% Accuracy
MSE:	 23.31830336349202 
RMSE:	 4.828902915103183 
MAPE:	 3.806885992834059

EMA
Prediction vs Close:		55.97% Accuracy
Prediction vs Prediction:	47.76% Accuracy
MSE:	 31.4391560756623 
RMSE:	 5.607063052584865 
MAPE:	 4.398444723456604

WMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	46.64% Accuracy
MSE:	 48.85272948439791 
RMSE:	 6.989472761546318 
MAPE:	 5.616901258925532

DEMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 143.4471215002686 
RMSE:	 11.976941241413376 
MAPE:	 10.686872819228396

KAMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	49.63% Accuracy
MSE:	 23.251670970583447 
RMSE:	 4.821998648961181 
MAPE:	 3.833042253232743

MIDPOINT
Prediction vs Close:		51.12% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 16.39872837560197 
RMSE:	 4.0495343405880595 
MAPE:	 3.299619771312048

T3
Prediction vs Close:		53.36% Accuracy
Prediction vs Prediction:	48.13% Accuracy
MSE:	 85.43536380908036 
RMSE:	 9.24312521872772 
MAPE:	 7.5496901284439915

TEMA
Prediction vs Close:		51.87% Accuracy
Prediction vs Prediction:	48.88% Accuracy
MSE:	 17.749245841001986 
RMSE:	 4.21298538343085 
MAPE:	 3.636908590169574
Runtime: mins: 9.928002486749998
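
The "Prediction vs Close" figures above come from a directional hit-rate: a step counts as correct when the forecast and the realized close land on the same side of the previous close. A self-contained sketch of that measure (the function name is illustrative, not from the notebook):

```python
def directional_accuracy(prediction, actual):
    """Share of steps where the forecast calls the next move correctly.

    A step is a hit when the prediction and the realized close are on
    the same side of the previous close; ties count as misses, matching
    the notebook's strict comparisons.
    """
    hits = []
    for i in range(1, len(prediction)):
        up = prediction[i] > actual[i - 1] and actual[i] > actual[i - 1]
        down = prediction[i] < actual[i - 1] and actual[i] < actual[i - 1]
        hits.append(1 if (up or down) else 0)
    return sum(hits) / len(hits)

# Example: both steps call the direction correctly
print(directional_accuracy([10, 12, 9], [10, 11, 10]))  # 1.0
```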

Architecture Used

In [ ]:
from google.colab import files
import cv2
uploaded = files.upload()
Upload widget is only available when the cell has been executed in the current browser session. Please rerun this cell to enable.
Saving Experiment4.png to Experiment4 (1).png
In [ ]:
img = cv2.imread('Experiment4.png')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # OpenCV loads BGR; matplotlib expects RGB
plt.figure(figsize=(20,10))
plt.axis("off")
plt.title('LSTM Architecture Experiment4', fontsize=18)
plt.imshow(img)
Out[ ]:
<matplotlib.image.AxesImage at 0x7fda6de16150>

Model Plots

In [77]:
with open('simulation4_data.json') as json_file:
    simulation4 = json.load(json_file)
fileimg = 'Experiment4'
In [78]:
for i in range(len(list(simulation4.keys()))):
  SIM = list(simulation4.keys())[i]
  plot_train(simulation4,SIM)
  plot_test(simulation4,SIM)
----- Train RMSE for SMA ----- 0.862136599337807
----- Train_MSE_LSTM for SMA ----- 0.7432795159177583
----- Train MAE LSTM for SMA ----- 0.6923502813471426
----- Test RMSE for SMA----- 4.828902915103183
----- Test_MSE_LSTM for SMA----- 23.31830336349202
----- Test_MAE_LSTM for SMA----- 3.806885992834059
----- Train RMSE for EMA ----- 5.766823380655699
----- Train_MSE_LSTM for EMA ----- 33.25625190367723
----- Train MAE LSTM for EMA ----- 5.764868316083851
----- Test RMSE for EMA----- 5.607063052584865
----- Test_MSE_LSTM for EMA----- 31.4391560756623
----- Test_MAE_LSTM for EMA----- 4.398444723456604
----- Train RMSE for WMA ----- 5.235584448867116
----- Train_MSE_LSTM for WMA ----- 27.41134452121919
----- Train MAE LSTM for WMA ----- 5.227147406870776
----- Test RMSE for WMA----- 6.989472761546318
----- Test_MSE_LSTM for WMA----- 48.85272948439791
----- Test_MAE_LSTM for WMA----- 5.616901258925532
----- Train RMSE for DEMA ----- 8.750974100017887
----- Train_MSE_LSTM for DEMA ----- 76.57954769918385
----- Train MAE LSTM for DEMA ----- 8.750575766705051
----- Test RMSE for DEMA----- 11.976941241413376
----- Test_MSE_LSTM for DEMA----- 143.4471215002686
----- Test_MAE_LSTM for DEMA----- 10.686872819228396
----- Train RMSE for KAMA ----- 2.103880765025744
----- Train_MSE_LSTM for KAMA ----- 4.42631427344531
----- Train MAE LSTM for KAMA ----- 2.0681962896101544
----- Test RMSE for KAMA----- 4.821998648961181
----- Test_MSE_LSTM for KAMA----- 23.251670970583447
----- Test_MAE_LSTM for KAMA----- 3.833042253232743
----- Train RMSE for MIDPOINT ----- 3.6774127068967175
----- Train_MSE_LSTM for MIDPOINT ----- 13.523364216845444
----- Train MAE LSTM for MIDPOINT ----- 3.613733980915334
----- Test RMSE for MIDPOINT----- 4.0495343405880595
----- Test_MSE_LSTM for MIDPOINT----- 16.39872837560197
----- Test_MAE_LSTM for MIDPOINT----- 3.299619771312048
----- Train RMSE for T3 ----- 3.104004527969992
----- Train_MSE_LSTM for T3 ----- 9.634844109658212
----- Train MAE LSTM for T3 ----- 2.999024202327917
----- Test RMSE for T3----- 9.24312521872772
----- Test_MSE_LSTM for T3----- 85.43536380908036
----- Test_MAE_LSTM for T3----- 7.5496901284439915
----- Train RMSE for TEMA ----- 0.9583344967025622
----- Train_MSE_LSTM for TEMA ----- 0.9184050075701532
----- Train MAE LSTM for TEMA ----- 0.7602822898638131
----- Test RMSE for TEMA----- 4.21298538343085
----- Test_MSE_LSTM for TEMA----- 17.749245841001986
----- Test_MAE_LSTM for TEMA----- 3.636908590169574

ARIMA with Exogenous Variables and Multistep Multivariate LSTM Hybrid Model - Experiment 5

In [ ]:
def get_arima_exog(dataframe, original_data, train_len, test_len):
    # Prepare train and test data for the exogenous variables
    X_value = pd.DataFrame(dataframe.iloc[:, :])
    y_value = pd.DataFrame(dataframe.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scale_dataset = X_scaler.fit_transform(X_value)
    y_scale_dataset = y_scaler.fit_transform(y_value)
    X_train, X_test = split_train_test(X_scale_dataset)
    y_train, y_test = split_train_test(y_scale_dataset)
    yc_train, yc_test = split_train_test(original_data)
    yc = yc_test.values.tolist()
    y_train_list = y_train.flatten().tolist()
    y_test_list = y_test.flatten().tolist()
    index_train, index_test = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)

    # Initialize model
    model = auto_arima(y_train_list,exogenous  = X_train,trace=True, error_action='ignore', start_p=1,start_q=1,max_p=3,max_q=3,d=3,
            suppress_warnings=True,stepwise=True,seasonal=True)

    # Determine model parameters
    print(model.summary())
    model.fit(y_train_list,maxiter=200)
    order = model.get_params()['order']
    print('ARIMA order:', order, '\n')

    # Generate predictions
    prediction = []
    for i in range(len(y_test_list)):
        model = pmdarima.ARIMA(order=order)
        model.fit(y_train_list)
        # print('working on', i+1, 'of', len(y_test), '-- ' + str(int(100 * (i + 1) / len(y_test))) + '% complete')

        prediction.append(model.predict()[0])
        y_train_list.append(y_test_list[i])

    predictionte = y_scaler.inverse_transform(np.array(prediction).reshape(-1,1))
    y_test_ = y_scaler.inverse_transform(np.array(y_test_list).reshape(-1,1))

    # Generate error data
    mse = mean_squared_error(yc_test, predictionte)
    rmse = mse ** 0.5
    mae = mean_absolute_error(y_test_ , predictionte )
    return yc,predictionte.flatten().tolist(), mse, rmse, mae
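
get_arima_exog scales the target into (-1, 1) with MinMaxScaler and then inverts the scaling on the predictions. The round-trip it relies on can be sketched in plain NumPy (an illustration of the transform, not the notebook's code):

```python
import numpy as np

def minmax_scale(x, lo=-1.0, hi=1.0):
    """Map x linearly onto [lo, hi]; return the scaled values and the
    parameters needed to undo the transform."""
    xmin, xmax = float(np.min(x)), float(np.max(x))
    scaled = (np.asarray(x, dtype=float) - xmin) / (xmax - xmin) * (hi - lo) + lo
    return scaled, (xmin, xmax, lo, hi)

def minmax_invert(scaled, params):
    """Undo minmax_scale, recovering values in the original units."""
    xmin, xmax, lo, hi = params
    return (np.asarray(scaled, dtype=float) - lo) / (hi - lo) * (xmax - xmin) + xmin

prices = np.array([100.0, 105.0, 110.0])
scaled, params = minmax_scale(prices)
print(scaled)                         # [-1.  0.  1.]
print(minmax_invert(scaled, params))  # [100. 105. 110.]
```

Forgetting the inverse step (or inverting with the wrong scaler) leaves ARIMA and LSTM forecasts in different units, which is why both functions keep separate X and y scalers.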
In [ ]:
def get_lstm(data, original_data, train_len, test_len, img_file, ma, lstm_len=3):
    # Prepare train and test data
    X_value = pd.DataFrame(data.iloc[:, :])
    y_value = pd.DataFrame(data.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scale_dataset = X_scaler.fit_transform(X_value)
    y_scale_dataset = y_scaler.fit_transform(y_value)
    # Window the scaled data: X has shape (samples, n_steps_in, n_features);
    # yc holds the corresponding closing prices
    X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)
    X_train, X_test = split_train_test(X)
    y_train, y_test = split_train_test(y)
    index_train, index_test = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
    det = 20  # ad hoc offset subtracted from the test predictions below
    input_dim = X_train.shape[1]     # n_steps_in, e.g. 3
    feature_size = X_train.shape[2]  # number of features, e.g. 24
    output_dim = y_train.shape[1]    # n_steps_out, e.g. 1



    # Option 1
    # Set up & fit LSTM RNN
    model = Sequential()
    model.add(LSTM(256, activation='relu', kernel_initializer='he_normal', input_shape=(input_dim, feature_size)))
    model.add(Dense(units=64,activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(units=output_dim))
    model.compile(optimizer=Adam(learning_rate = 0.001), loss='mse')

    ## Common code
    callbacks = [
        EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
        ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
        ModelCheckpoint('LSTM5.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    fname1 = img_file+'.png'
    tensorflow.keras.utils.plot_model(
        model, to_file=fname1, show_shapes=True, show_dtype=False,
        show_layer_names=True, expand_nested=False, dpi=96,
        layer_range=None, show_layer_activations=False
    )
    history = model.fit(X_train, y_train, epochs=500, batch_size=int( optimized_period[ma]), verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # plot loss
    fname2 = img_file+'-'+ma
    plt.title(img_file+'-'+ma+' Loss')
    plt.xlabel("Epochs")
    plt.ylabel("Loss")
    plt.plot(history.history['loss'], label='train')
    plt.plot(history.history['val_loss'], label='validation')
    plt.legend()
    plt.savefig(fname2+'.png', dpi='figure')
    plt.show()


    # # option 2
    # model = Sequential()
    # model.add(Bidirectional(LSTM(units= 128), input_shape=(input_dim, feature_size)))
    # model.add(Dense(64))
    # model.add(Dense(units=output_dim))
    # model.compile(optimizer=Adam(lr = 0.001), loss='mean_squared_error', metrics=['accuracy'])
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()

    # Option 3
    # define custom activation
    # 
    # class Double_Tanh(Activation):
    #     def __init__(self, activation, **kwargs):
    #         super(Double_Tanh, self).__init__(activation, **kwargs)
    #         self.__name__ = 'double_tanh'

    # def double_tanh(x):
    #     return (K.tanh(x) * 2)

    # get_custom_objects().update({'double_tanh':Double_Tanh(double_tanh)})
    #     # Model Generation
    # model = Sequential()
    # #check https://machinelearningmastery.com/use-weight-regularization-lstm-networks-time-series-forecasting/
    # model.add(LSTM(25, input_shape=(input_dim, feature_size), dropout=0.2, kernel_regularizer=l1_l2(0.00,0.00), bias_regularizer=l1_l2(0.00,0.00)))
    # model.add(Dense(1))
    # model.add(Activation(double_tanh))
    # model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mse', 'mae'])
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()

    # Option 4
    # Set up & fit LSTM RNN
    # model = Sequential()
    # model.add(LSTM(units=lstm_len, return_sequences=True, input_shape=(input_dim, feature_size)))
    # model.add(LSTM(units=int(lstm_len/2)))
    # model.add(Dense(1, activation='sigmoid'))
    # model.compile(loss='mean_squared_error', optimizer='adam')
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM5.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()



    # Generate predictions
    predictiontr = model.predict(X_train, verbose=0)
    predictiontr = y_scaler.inverse_transform(predictiontr).tolist()
    outputtr = []
    for i in range(len(predictiontr)):
        outputtr.extend(predictiontr[i])
    predictiontr = outputtr
    # Generate error data

    ## replace with yc , xtest generated by new multistep method
    mse_tr = mean_squared_error(y_train, predictiontr)
    rmse_tr = mse_tr ** 0.5
    # mape = mean_absolute_percentage_error(X_test, pd.Series(predictiontr))
    mae_tr = mean_absolute_error(y_train, pd.Series(predictiontr))
    # Original_tr = pd.Series(yc_train)
    Original_tr = y_scaler.inverse_transform(y_train).flatten().tolist()


    predictionte = model.predict(X_test, verbose=0)
    predictionte = (y_scaler.inverse_transform(predictionte)-det).tolist()
    outputte = []
    for i in range(len(predictionte)):
        outputte.extend(predictionte[i])
    predictionte = outputte
    # Generate error data

    mse_te = mean_squared_error(y_test, predictionte)
    rmse_te = mse_te ** 0.5
    # mape = mean_absolute_percentage_error(X_test, pd.Series(predictionte))
    mae_te = mean_absolute_error(y_test, pd.Series(predictionte))
    # Original_te = pd.Series(yc_test)
    Original_te = y_scaler.inverse_transform(y_test).flatten().tolist()

    return Original_tr, predictiontr, mse_tr, rmse_tr,mae_tr,Original_te,predictionte, mse_te,rmse_te,mae_te
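
The driver cell that follows splits each series into a low-volatility component (a moving average of the close) and a high-volatility residual, models them with ARIMA and the LSTM respectively, and recombines the two forecasts. The decomposition step can be sketched as below, using a plain centered simple moving average as a stand-in for the TA-Lib call (edge handling differs from the notebook's fillna(0)):

```python
import numpy as np

def decompose(close, window):
    """Split a price series into a smooth moving-average component and
    the residual around it, so that low_vol + high_vol == close."""
    close = np.asarray(close, dtype=float)
    kernel = np.ones(window) / window
    low_vol = np.convolve(close, kernel, mode='same')  # centered SMA, modeled by ARIMA
    high_vol = close - low_vol                         # residual, modeled by the LSTM
    return low_vol, high_vol

close = np.array([10.0, 11.0, 12.0, 11.0, 10.0])
low, high = decompose(close, window=3)
print(np.allclose(low + high, close))  # True
```

Because the two components sum exactly to the close, adding the ARIMA forecast of `low_vol` to the LSTM forecast of `high_vol` yields a forecast in the original price units.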
In [ ]:
if __name__ == '__main__':
    start_time = timeit.default_timer()
    simulation5 = {}
    imgfile = 'Experiment5'
    for ma in optimized_period:
                print(ma)
                print(functions[ma])
                print ( int( optimized_period[ma]))
                low_vol = df.apply(lambda c:  functions[ma](c, timeperiod = int( optimized_period[ma])))
                low_vol = low_vol.fillna(0)
                low_vol_data = df['close']
                high_vol = pd.DataFrame()
                df2 = df.copy()
                for i in df2.columns:
                  if i in low_vol.columns:
                    high_vol[i] = df2[i].subtract(low_vol[i], fill_value=0)
                high_vol_data = df['close']
                ## *****************************************************
                # Generate ARIMA and LSTM predictions
                print('\nWorking on ' + ma + ' predictions')
                try:
                  print('parameters used : ', train_len, test_len)
                  low_vol_Original, low_vol_prediction, low_vol_mse, low_vol_rmse,low_vol_mae = get_arima_exog(low_vol,low_vol_data, train_len, test_len)
                except:
                    print('ARIMA error, skipping to next MA type')
                    continue
                Original_tr, predictiontr, mse_tr, rmse_tr,mae_tr,high_vol_Original, high_vol_prediction, high_vol_mse, high_vol_rmse,high_vol_mae, = get_lstm(high_vol,high_vol_data, train_len, test_len,imgfile,ma)
                final_prediction_tr = df['close'].head(train_len).values + pd.Series(predictiontr) # ignoring first 3 steps 
                mse_ftr = mean_squared_error(df['close'].head(train_len).values,final_prediction_tr.values)
                rmse_ftr = mse_ftr ** 0.5
                mape_ftr = mean_absolute_percentage_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
                mae_ftr = mean_absolute_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)

                final_prediction = pd.Series(low_vol_prediction[3:]) + pd.Series(high_vol_prediction)
                mse = mean_squared_error(df['close'].tail(test_len).values,final_prediction.values)
                rmse = mse ** 0.5
                mape = mean_absolute_percentage_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
                mae = mean_absolute_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
                # Generate prediction accuracy
                actual = df['close'].tail(test_len).values
                result_1 = []
                result_2 = []
                for i in range(1, len(final_prediction)):
                    # Compare prediction to previous close price
                    if final_prediction[i] > actual[i-1] and actual[i] > actual[i-1]:
                        result_1.append(1)
                    elif final_prediction[i] < actual[i-1] and actual[i] < actual[i-1]:
                        result_1.append(1)
                    else:
                        result_1.append(0)

                    # Compare prediction to previous prediction
                    if final_prediction[i] > final_prediction[i-1] and actual[i] > actual[i-1]:
                        result_2.append(1)
                    elif final_prediction[i] < final_prediction[i-1] and actual[i] < actual[i-1]:
                        result_2.append(1)
                    else:
                        result_2.append(0)

                accuracy_1 = np.mean(result_1)
                accuracy_2 = np.mean(result_2)

                simulation5[ma] = {'low_vol': {'original':list(low_vol_Original), 'prediction': list(low_vol_prediction) , 'mse': low_vol_mse,
                                              'rmse': low_vol_rmse, 'mae' : low_vol_mae},
                                  'high_vol': {'original':list(high_vol_Original),'prediction': list(high_vol_prediction), 'mse': high_vol_mse,
                                              'rmse': high_vol_rmse, 'mae' : high_vol_mae},
                                  'final_tr': {'original':df['close'].head(train_len).tolist(),'prediction': final_prediction_tr.values.tolist(), 'mse': mse_ftr,
                                              'rmse': rmse_ftr, 'mae' : mae_ftr},
                                  'final': {'original': df['close'].tail(test_len).tolist(), 'prediction': final_prediction.values.tolist(), 'mse': mse,
                                            'rmse': rmse, 'mae': mae },
                                  'accuracy': {'prediction vs close': accuracy_1, 'prediction vs prediction': accuracy_2}}

                # save simulation data here as checkpoint
                with open('simulation5_data.json', 'w') as fp:
                    json.dump(simulation5, fp)

                for key in simulation5.keys():
                    print('\n' + key)
                    print('Prediction vs Close:\t\t' + str(round(100*simulation5[key]['accuracy']['prediction vs close'], 2))
                          + '% Accuracy')
                    print('Prediction vs Prediction:\t' + str(round(100*simulation5[key]['accuracy']['prediction vs prediction'], 2))
                          + '% Accuracy')
                    print('MSE:\t', simulation5[key]['final']['mse'],
                          '\nRMSE:\t', simulation5[key]['final']['rmse'],
                          '\nMAE:\t', simulation5[key]['final']['mae'])
    elapsed = timeit.default_timer() - start_time
    print('Runtime: mins:',elapsed/60)
SMA
SMA([input_arrays], [timeperiod=30])

Simple Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
17

Working on SMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-15000.708, Time=8.38 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-13492.284, Time=2.29 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-15827.971, Time=8.04 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-13635.197, Time=10.22 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-14132.778, Time=3.77 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-15140.312, Time=10.05 sec
 ARIMA(1,3,0)(0,0,0)[0] intercept   : AIC=-13970.469, Time=7.25 sec

Best model:  ARIMA(1,3,0)(0,0,0)[0]          
Total fit time: 50.014 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(1, 3, 0)   Log Likelihood                7936.985
Date:                Sun, 12 Dec 2021   AIC                         -15827.971
Time:                        13:27:20   BIC                         -15720.081
Sample:                             0   HQIC                        -15786.537
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -4.786e-05      0.001     -0.066      0.947      -0.001       0.001
x2         -4.789e-05      0.001     -0.085      0.932      -0.001       0.001
x3         -4.819e-05      0.000     -0.105      0.917      -0.001       0.001
x4             1.0000      0.001   1557.248      0.000       0.999       1.001
x5         -4.579e-05      0.001     -0.071      0.943      -0.001       0.001
x6          -5.16e-05      0.000     -0.432      0.666      -0.000       0.000
x7         -4.778e-05      0.000     -0.278      0.781      -0.000       0.000
x8            -0.0012      0.000     -7.403      0.000      -0.002      -0.001
x9         -3.454e-06      0.002     -0.002      0.998      -0.003       0.003
x10           -0.0005      0.001     -0.403      0.687      -0.003       0.002
x11            0.0029      0.000     10.904      0.000       0.002       0.003
x12           -0.0003      0.000     -1.815      0.069      -0.001    2.06e-05
x13        -4.809e-05      0.000     -0.157      0.875      -0.001       0.001
x14           -0.0001      0.000     -0.482      0.630      -0.001       0.000
x15        -5.214e-05      0.000     -0.273      0.785      -0.000       0.000
x16        -4.468e-05      0.000     -0.125      0.901      -0.001       0.001
x17        -4.224e-05      0.000     -0.202      0.840      -0.000       0.000
x18        -8.086e-05      0.000     -0.270      0.787      -0.001       0.001
x19        -5.537e-05      0.000     -0.244      0.807      -0.000       0.000
x20         8.423e-05      0.000      0.333      0.739      -0.000       0.001
x21        -4.232e-05      0.000     -0.166      0.868      -0.001       0.000
ar.L1         -0.6666   6.03e-06  -1.11e+05      0.000      -0.667      -0.667
sigma2      4.093e-10   8.97e-11      4.563      0.000    2.33e-10    5.85e-10
===================================================================================
Ljung-Box (L1) (Q):                  60.24   Jarque-Bera (JB):           1334882.31
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.11   Skew:                            -3.81
Prob(H) (two-sided):                  0.00   Kurtosis:                       202.35
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 5.73e+20. Standard errors may be unstable.
ARIMA order: (1, 3, 0) 

WARNING:tensorflow:Layer lstm_17 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.07596, saving model to LSTM5.h5
48/48 - 2s - loss: 0.4278 - val_loss: 0.0760 - lr: 0.0010 - 2s/epoch - 37ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.07596 to 0.01940, saving model to LSTM5.h5
48/48 - 0s - loss: 0.1376 - val_loss: 0.0194 - lr: 0.0010 - 413ms/epoch - 9ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.01940
48/48 - 0s - loss: 0.0839 - val_loss: 0.8476 - lr: 0.0010 - 359ms/epoch - 7ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.01940
48/48 - 0s - loss: 0.0545 - val_loss: 0.1203 - lr: 0.0010 - 372ms/epoch - 8ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.01940 to 0.01333, saving model to LSTM5.h5
48/48 - 0s - loss: 0.0497 - val_loss: 0.0133 - lr: 0.0010 - 369ms/epoch - 8ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.01333
48/48 - 0s - loss: 0.0493 - val_loss: 0.2247 - lr: 0.0010 - 353ms/epoch - 7ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.01333
48/48 - 0s - loss: 0.0531 - val_loss: 0.0729 - lr: 0.0010 - 359ms/epoch - 7ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.01333
48/48 - 0s - loss: 0.0445 - val_loss: 0.0255 - lr: 0.0010 - 370ms/epoch - 8ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.01333
48/48 - 0s - loss: 0.0439 - val_loss: 0.0683 - lr: 0.0010 - 366ms/epoch - 8ms/step
Epoch 10/500

Epoch 00010: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00010: val_loss did not improve from 0.01333
48/48 - 0s - loss: 0.0365 - val_loss: 0.0208 - lr: 0.0010 - 362ms/epoch - 8ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.01333
48/48 - 0s - loss: 0.0370 - val_loss: 0.0241 - lr: 1.0000e-04 - 350ms/epoch - 7ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.01333
48/48 - 0s - loss: 0.0370 - val_loss: 0.0294 - lr: 1.0000e-04 - 397ms/epoch - 8ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.01333
48/48 - 0s - loss: 0.0369 - val_loss: 0.0328 - lr: 1.0000e-04 - 375ms/epoch - 8ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.01333
48/48 - 0s - loss: 0.0400 - val_loss: 0.0344 - lr: 1.0000e-04 - 354ms/epoch - 7ms/step
Epoch 15/500

Epoch 00015: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00015: val_loss did not improve from 0.01333
48/48 - 0s - loss: 0.0391 - val_loss: 0.0289 - lr: 1.0000e-04 - 403ms/epoch - 8ms/step
[epochs 16-28 elided: val_loss hovered between 0.027 and 0.029 and did not improve from 0.01333; the learning rate held at 1e-05 after epoch 20]
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.01333
48/48 - 0s - loss: 0.0377 - val_loss: 0.0278 - lr: 1.0000e-05 - 377ms/epoch - 8ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.01333
48/48 - 0s - loss: 0.0389 - val_loss: 0.0288 - lr: 1.0000e-05 - 369ms/epoch - 8ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.01333
48/48 - 0s - loss: 0.0366 - val_loss: 0.0283 - lr: 1.0000e-05 - 361ms/epoch - 8ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.01333
48/48 - 0s - loss: 0.0354 - val_loss: 0.0281 - lr: 1.0000e-05 - 362ms/epoch - 8ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.01333
48/48 - 0s - loss: 0.0357 - val_loss: 0.0282 - lr: 1.0000e-05 - 362ms/epoch - 8ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.01333
48/48 - 0s - loss: 0.0365 - val_loss: 0.0276 - lr: 1.0000e-05 - 346ms/epoch - 7ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.01333
48/48 - 0s - loss: 0.0393 - val_loss: 0.0273 - lr: 1.0000e-05 - 356ms/epoch - 7ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.01333
48/48 - 0s - loss: 0.0416 - val_loss: 0.0266 - lr: 1.0000e-05 - 340ms/epoch - 7ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.01333
48/48 - 0s - loss: 0.0360 - val_loss: 0.0263 - lr: 1.0000e-05 - 354ms/epoch - 7ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.01333
48/48 - 0s - loss: 0.0385 - val_loss: 0.0268 - lr: 1.0000e-05 - 354ms/epoch - 7ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.01333
48/48 - 0s - loss: 0.0354 - val_loss: 0.0267 - lr: 1.0000e-05 - 371ms/epoch - 8ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.01333
48/48 - 0s - loss: 0.0376 - val_loss: 0.0260 - lr: 1.0000e-05 - 369ms/epoch - 8ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.01333
48/48 - 0s - loss: 0.0370 - val_loss: 0.0254 - lr: 1.0000e-05 - 406ms/epoch - 8ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.01333
48/48 - 0s - loss: 0.0361 - val_loss: 0.0259 - lr: 1.0000e-05 - 368ms/epoch - 8ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.01333
48/48 - 0s - loss: 0.0388 - val_loss: 0.0262 - lr: 1.0000e-05 - 381ms/epoch - 8ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.01333
48/48 - 0s - loss: 0.0373 - val_loss: 0.0277 - lr: 1.0000e-05 - 356ms/epoch - 7ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.01333
48/48 - 0s - loss: 0.0341 - val_loss: 0.0280 - lr: 1.0000e-05 - 363ms/epoch - 8ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.01333
48/48 - 0s - loss: 0.0388 - val_loss: 0.0282 - lr: 1.0000e-05 - 385ms/epoch - 8ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.01333
48/48 - 0s - loss: 0.0371 - val_loss: 0.0296 - lr: 1.0000e-05 - 363ms/epoch - 8ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.01333
48/48 - 0s - loss: 0.0418 - val_loss: 0.0290 - lr: 1.0000e-05 - 385ms/epoch - 8ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.01333
48/48 - 0s - loss: 0.0371 - val_loss: 0.0296 - lr: 1.0000e-05 - 384ms/epoch - 8ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.01333
48/48 - 0s - loss: 0.0369 - val_loss: 0.0288 - lr: 1.0000e-05 - 347ms/epoch - 7ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.01333
48/48 - 0s - loss: 0.0367 - val_loss: 0.0291 - lr: 1.0000e-05 - 389ms/epoch - 8ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.01333
48/48 - 0s - loss: 0.0307 - val_loss: 0.0293 - lr: 1.0000e-05 - 359ms/epoch - 7ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.01333
48/48 - 0s - loss: 0.0320 - val_loss: 0.0294 - lr: 1.0000e-05 - 356ms/epoch - 7ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.01333
48/48 - 0s - loss: 0.0365 - val_loss: 0.0281 - lr: 1.0000e-05 - 366ms/epoch - 8ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.01333
48/48 - 0s - loss: 0.0369 - val_loss: 0.0275 - lr: 1.0000e-05 - 365ms/epoch - 8ms/step
Epoch 00055: early stopping
SMA
Prediction vs Close:		51.49% Accuracy
Prediction vs Prediction:	48.13% Accuracy
MSE:	 38.984836670221576 
RMSE:	 6.24378384236847 
MAPE:	 5.10393861237253
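The SMA block above reports a directional accuracy plus MSE, RMSE, and MAPE. A minimal sketch of how such metrics can be computed is below; this is not the notebook's actual evaluation code, `preds` and `close` are hypothetical toy arrays, and "Prediction vs Close" is assumed to mean the share of steps where the predicted move matches the actual move.

```python
import math

def evaluate(preds, close):
    """Directional accuracy plus MSE / RMSE / MAPE for a forecast series.

    Assumed definitions (the notebook's own code may differ):
    - a direction "hit" is when the predicted change from the previous close
      has the same sign as the actual change
    - MAPE is expressed in percent
    """
    n = len(preds)
    hits = sum(
        (preds[i] - close[i - 1]) * (close[i] - close[i - 1]) > 0
        for i in range(1, n)
    )
    mse = sum((p - c) ** 2 for p, c in zip(preds, close)) / n
    mape = 100 * sum(abs((c - p) / c) for p, c in zip(preds, close)) / n
    return {
        "direction_acc_pct": 100 * hits / (n - 1),
        "MSE": mse,
        "RMSE": math.sqrt(mse),
        "MAPE": mape,
    }

# toy series, purely illustrative
close = [100.0, 102.0, 101.0, 103.0, 104.0]
preds = [100.5, 101.5, 101.5, 102.5, 104.5]
print(evaluate(preds, close))
```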
EMA
EMA([input_arrays], [timeperiod=30])

Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
51
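The `EMA([input_arrays], [timeperiod=30])` help text above comes from TA-Lib. Conceptually, the EMA applies the exponential smoothing recurrence with weight α = 2/(timeperiod + 1), seeded with a simple average of the first window. The pure-Python sketch below illustrates that recurrence; it is an approximation for illustration, not TA-Lib's exact implementation (TA-Lib's NaN/warm-up handling differs).

```python
def ema(prices, timeperiod=30):
    """Exponential moving average with alpha = 2 / (timeperiod + 1),
    seeded with the simple average of the first `timeperiod` values.
    Edge handling is simplified relative to talib.EMA."""
    if len(prices) < timeperiod:
        return []
    alpha = 2.0 / (timeperiod + 1)
    out = [sum(prices[:timeperiod]) / timeperiod]  # SMA seed
    for price in prices[timeperiod:]:
        out.append(alpha * price + (1 - alpha) * out[-1])
    return out

series = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
print(ema(series, timeperiod=3))  # [2.0, 3.0, 4.0, 5.0]
```

The same recurrence, with different weighting schemes, underlies the WMA and DEMA variants used later in the notebook.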

Working on EMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-17005.775, Time=2.37 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14574.593, Time=3.89 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16801.081, Time=8.82 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14572.593, Time=5.10 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-14532.068, Time=7.39 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-15610.472, Time=11.82 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-16103.302, Time=12.30 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-17030.021, Time=4.45 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=-17004.614, Time=3.05 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=-17087.441, Time=5.90 sec
 ARIMA(3,3,2)(0,0,0)[0]             : AIC=inf, Time=17.75 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal
  return np.roots(self.polynomial_reduced_ma)**-1
 ARIMA(2,3,2)(0,0,0)[0]             : AIC=-17003.984, Time=3.22 sec
 ARIMA(3,3,1)(0,0,0)[0] intercept   : AIC=-16998.666, Time=3.64 sec

Best model:  ARIMA(3,3,1)(0,0,0)[0]          
Total fit time: 89.716 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 1)   Log Likelihood                8569.720
Date:                Sun, 12 Dec 2021   AIC                         -17087.441
Time:                        13:29:53   BIC                         -16965.479
Sample:                             0   HQIC                        -17040.602
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -2.316e-10   9.87e-21  -2.35e+10      0.000   -2.32e-10   -2.32e-10
x2         -2.308e-10   9.85e-21  -2.34e+10      0.000   -2.31e-10   -2.31e-10
x3         -2.324e-10   9.88e-21  -2.35e+10      0.000   -2.32e-10   -2.32e-10
x4             1.0000   9.87e-21   1.01e+20      0.000       1.000       1.000
x5         -2.106e-10   9.41e-21  -2.24e+10      0.000   -2.11e-10   -2.11e-10
x6         -7.996e-10   1.74e-20  -4.59e+10      0.000      -8e-10      -8e-10
x7         -2.295e-10   9.82e-21  -2.34e+10      0.000   -2.29e-10   -2.29e-10
x8         -2.244e-10   9.71e-21  -2.31e+10      0.000   -2.24e-10   -2.24e-10
x9         -1.166e-11   1.98e-21   -5.9e+09      0.000   -1.17e-11   -1.17e-11
x10        -4.453e-11   4.22e-21  -1.06e+10      0.000   -4.45e-11   -4.45e-11
x11        -2.219e-10   9.65e-21   -2.3e+10      0.000   -2.22e-10   -2.22e-10
x12        -2.264e-10   9.76e-21  -2.32e+10      0.000   -2.26e-10   -2.26e-10
x13        -2.315e-10   9.87e-21  -2.35e+10      0.000   -2.31e-10   -2.31e-10
x14        -1.766e-09   2.73e-20  -6.48e+10      0.000   -1.77e-09   -1.77e-09
x15        -2.167e-10   9.37e-21  -2.31e+10      0.000   -2.17e-10   -2.17e-10
x16        -5.232e-10   1.49e-20  -3.52e+10      0.000   -5.23e-10   -5.23e-10
x17        -2.147e-10   9.48e-21  -2.27e+10      0.000   -2.15e-10   -2.15e-10
x18        -3.791e-11   3.96e-21  -9.56e+09      0.000   -3.79e-11   -3.79e-11
x19        -2.597e-10   1.05e-20  -2.48e+10      0.000    -2.6e-10    -2.6e-10
x20        -2.417e-10      1e-20  -2.41e+10      0.000   -2.42e-10   -2.42e-10
x21        -4.823e-10    1.4e-20  -3.44e+10      0.000   -4.82e-10   -4.82e-10
ar.L1         -0.4923   1.46e-22  -3.38e+21      0.000      -0.492      -0.492
ar.L2         -0.1923   8.47e-23  -2.27e+21      0.000      -0.192      -0.192
ar.L3         -0.0462   4.02e-23  -1.15e+21      0.000      -0.046      -0.046
ma.L1         -0.7077   3.31e-22  -2.14e+21      0.000      -0.708      -0.708
sigma2       8.99e-11   6.95e-11      1.293      0.196   -4.64e-11    2.26e-10
===================================================================================
Ljung-Box (L1) (Q):                  54.09   Jarque-Bera (JB):           4207353.17
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             5.48
Prob(H) (two-sided):                  0.00   Kurtosis:                       357.00
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 3.15e+40. Standard errors may be unstable.
ARIMA order: (3, 3, 1) 
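The information criteria in the SARIMAX summary above can be sanity-checked by hand from the reported log-likelihood, using AIC = 2k − 2·lnL and BIC = k·ln(n) − 2·lnL, where k counts estimated parameters (here 21 exogenous terms + 3 AR + 1 MA + σ² = 26) and n is assumed to be the effective sample size after d = 3 differencing (808 − 3 = 805):

```python
import math

log_lik = 8569.720   # Log Likelihood from the SARIMAX summary above
k = 21 + 3 + 1 + 1   # exogenous terms + AR lags + MA lag + sigma2
n_eff = 808 - 3      # assumed: observations net of d=3 differencing

aic = 2 * k - 2 * log_lik            # matches the reported -17087.441
bic = k * math.log(n_eff) - 2 * log_lik  # matches the reported -16965.479
print(round(aic, 3), round(bic, 3))
```

The small residual differences (on the order of 0.001) come from rounding the log-likelihood to three decimals in the printed summary.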

WARNING:tensorflow:Layer lstm_18 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.37093, saving model to LSTM5.h5
16/16 - 2s - loss: 0.6061 - val_loss: 0.3709 - lr: 0.0010 - 2s/epoch - 100ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.37093 to 0.15384, saving model to LSTM5.h5
16/16 - 0s - loss: 0.2060 - val_loss: 0.1538 - lr: 0.0010 - 155ms/epoch - 10ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.15384 to 0.05870, saving model to LSTM5.h5
16/16 - 0s - loss: 0.0942 - val_loss: 0.0587 - lr: 0.0010 - 154ms/epoch - 10ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.05870 to 0.01500, saving model to LSTM5.h5
16/16 - 0s - loss: 0.0690 - val_loss: 0.0150 - lr: 0.0010 - 144ms/epoch - 9ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.01500 to 0.00667, saving model to LSTM5.h5
16/16 - 0s - loss: 0.0491 - val_loss: 0.0067 - lr: 0.0010 - 172ms/epoch - 11ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.00667
16/16 - 0s - loss: 0.0536 - val_loss: 0.0068 - lr: 0.0010 - 137ms/epoch - 9ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.00667
16/16 - 0s - loss: 0.0430 - val_loss: 0.0101 - lr: 0.0010 - 137ms/epoch - 9ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.00667
16/16 - 0s - loss: 0.0406 - val_loss: 0.0245 - lr: 0.0010 - 133ms/epoch - 8ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.00667
16/16 - 0s - loss: 0.0375 - val_loss: 0.0093 - lr: 0.0010 - 139ms/epoch - 9ms/step
Epoch 10/500

Epoch 00010: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00010: val_loss did not improve from 0.00667
16/16 - 0s - loss: 0.0355 - val_loss: 0.0169 - lr: 0.0010 - 133ms/epoch - 8ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.00667
16/16 - 0s - loss: 0.0374 - val_loss: 0.0167 - lr: 1.0000e-04 - 149ms/epoch - 9ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.00667
16/16 - 0s - loss: 0.0344 - val_loss: 0.0143 - lr: 1.0000e-04 - 154ms/epoch - 10ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.00667
16/16 - 0s - loss: 0.0325 - val_loss: 0.0138 - lr: 1.0000e-04 - 138ms/epoch - 9ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.00667
16/16 - 0s - loss: 0.0326 - val_loss: 0.0144 - lr: 1.0000e-04 - 133ms/epoch - 8ms/step
Epoch 15/500

Epoch 00015: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00015: val_loss did not improve from 0.00667
16/16 - 0s - loss: 0.0288 - val_loss: 0.0146 - lr: 1.0000e-04 - 126ms/epoch - 8ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.00667
16/16 - 0s - loss: 0.0315 - val_loss: 0.0145 - lr: 1.0000e-05 - 137ms/epoch - 9ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.00667
16/16 - 0s - loss: 0.0359 - val_loss: 0.0145 - lr: 1.0000e-05 - 144ms/epoch - 9ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.00667
16/16 - 0s - loss: 0.0350 - val_loss: 0.0146 - lr: 1.0000e-05 - 149ms/epoch - 9ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.00667
16/16 - 0s - loss: 0.0296 - val_loss: 0.0147 - lr: 1.0000e-05 - 145ms/epoch - 9ms/step
Epoch 20/500

Epoch 00020: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00020: val_loss did not improve from 0.00667
16/16 - 0s - loss: 0.0356 - val_loss: 0.0150 - lr: 1.0000e-05 - 150ms/epoch - 9ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.00667
16/16 - 0s - loss: 0.0354 - val_loss: 0.0152 - lr: 1.0000e-05 - 132ms/epoch - 8ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.00667
16/16 - 0s - loss: 0.0317 - val_loss: 0.0153 - lr: 1.0000e-05 - 127ms/epoch - 8ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.00667
16/16 - 0s - loss: 0.0325 - val_loss: 0.0153 - lr: 1.0000e-05 - 128ms/epoch - 8ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.00667
16/16 - 0s - loss: 0.0334 - val_loss: 0.0154 - lr: 1.0000e-05 - 141ms/epoch - 9ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.00667
16/16 - 0s - loss: 0.0345 - val_loss: 0.0156 - lr: 1.0000e-05 - 156ms/epoch - 10ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.00667
16/16 - 0s - loss: 0.0295 - val_loss: 0.0158 - lr: 1.0000e-05 - 136ms/epoch - 8ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.00667
16/16 - 0s - loss: 0.0315 - val_loss: 0.0160 - lr: 1.0000e-05 - 142ms/epoch - 9ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.00667
16/16 - 0s - loss: 0.0327 - val_loss: 0.0158 - lr: 1.0000e-05 - 134ms/epoch - 8ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.00667
16/16 - 0s - loss: 0.0352 - val_loss: 0.0155 - lr: 1.0000e-05 - 143ms/epoch - 9ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.00667
16/16 - 0s - loss: 0.0317 - val_loss: 0.0154 - lr: 1.0000e-05 - 136ms/epoch - 8ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.00667
16/16 - 0s - loss: 0.0321 - val_loss: 0.0152 - lr: 1.0000e-05 - 140ms/epoch - 9ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.00667
16/16 - 0s - loss: 0.0351 - val_loss: 0.0150 - lr: 1.0000e-05 - 130ms/epoch - 8ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.00667
16/16 - 0s - loss: 0.0314 - val_loss: 0.0150 - lr: 1.0000e-05 - 144ms/epoch - 9ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.00667
16/16 - 0s - loss: 0.0327 - val_loss: 0.0152 - lr: 1.0000e-05 - 129ms/epoch - 8ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.00667
16/16 - 0s - loss: 0.0307 - val_loss: 0.0153 - lr: 1.0000e-05 - 126ms/epoch - 8ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.00667
16/16 - 0s - loss: 0.0327 - val_loss: 0.0153 - lr: 1.0000e-05 - 137ms/epoch - 9ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.00667
16/16 - 0s - loss: 0.0332 - val_loss: 0.0151 - lr: 1.0000e-05 - 126ms/epoch - 8ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.00667
16/16 - 0s - loss: 0.0311 - val_loss: 0.0152 - lr: 1.0000e-05 - 124ms/epoch - 8ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.00667
16/16 - 0s - loss: 0.0309 - val_loss: 0.0154 - lr: 1.0000e-05 - 131ms/epoch - 8ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.00667
16/16 - 0s - loss: 0.0343 - val_loss: 0.0157 - lr: 1.0000e-05 - 141ms/epoch - 9ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.00667
16/16 - 0s - loss: 0.0348 - val_loss: 0.0159 - lr: 1.0000e-05 - 139ms/epoch - 9ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.00667
16/16 - 0s - loss: 0.0308 - val_loss: 0.0157 - lr: 1.0000e-05 - 143ms/epoch - 9ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.00667
16/16 - 0s - loss: 0.0327 - val_loss: 0.0158 - lr: 1.0000e-05 - 145ms/epoch - 9ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.00667
16/16 - 0s - loss: 0.0285 - val_loss: 0.0159 - lr: 1.0000e-05 - 126ms/epoch - 8ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.00667
16/16 - 0s - loss: 0.0334 - val_loss: 0.0160 - lr: 1.0000e-05 - 135ms/epoch - 8ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.00667
16/16 - 0s - loss: 0.0317 - val_loss: 0.0161 - lr: 1.0000e-05 - 133ms/epoch - 8ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.00667
16/16 - 0s - loss: 0.0298 - val_loss: 0.0163 - lr: 1.0000e-05 - 131ms/epoch - 8ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.00667
16/16 - 0s - loss: 0.0325 - val_loss: 0.0161 - lr: 1.0000e-05 - 132ms/epoch - 8ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.00667
16/16 - 0s - loss: 0.0335 - val_loss: 0.0159 - lr: 1.0000e-05 - 162ms/epoch - 10ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.00667
16/16 - 0s - loss: 0.0320 - val_loss: 0.0158 - lr: 1.0000e-05 - 134ms/epoch - 8ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.00667
16/16 - 0s - loss: 0.0327 - val_loss: 0.0159 - lr: 1.0000e-05 - 133ms/epoch - 8ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.00667
16/16 - 0s - loss: 0.0344 - val_loss: 0.0159 - lr: 1.0000e-05 - 144ms/epoch - 9ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.00667
16/16 - 0s - loss: 0.0322 - val_loss: 0.0159 - lr: 1.0000e-05 - 128ms/epoch - 8ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.00667
16/16 - 0s - loss: 0.0327 - val_loss: 0.0158 - lr: 1.0000e-05 - 136ms/epoch - 9ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.00667
16/16 - 0s - loss: 0.0317 - val_loss: 0.0160 - lr: 1.0000e-05 - 132ms/epoch - 8ms/step
Epoch 00055: early stopping
SMA
Prediction vs Close:		51.49% Accuracy
Prediction vs Prediction:	48.13% Accuracy
MSE:	 38.984836670221576 
RMSE:	 6.24378384236847 
MAPE:	 5.10393861237253

EMA
Prediction vs Close:		55.22% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 124.25764226707467 
RMSE:	 11.1470912020614 
MAPE:	 9.17724208981177
WMA
WMA([input_arrays], [timeperiod=30])

Weighted Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
49

Working on WMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-14480.432, Time=9.64 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-15747.905, Time=6.37 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-15116.389, Time=7.16 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-13532.115, Time=7.90 sec
 ARIMA(0,3,0)(0,0,0)[0] intercept   : AIC=-13619.624, Time=5.46 sec

Best model:  ARIMA(0,3,0)(0,0,0)[0]          
Total fit time: 36.552 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(0, 3, 0)   Log Likelihood                7895.952
Date:                Sun, 12 Dec 2021   AIC                         -15747.905
Time:                        13:38:05   BIC                         -15644.706
Sample:                             0   HQIC                        -15708.272
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1          3.384e-05    1.9e-05      1.778      0.075   -3.47e-06    7.12e-05
x2          3.379e-05   1.84e-05      1.832      0.067   -2.35e-06    6.99e-05
x3          3.388e-05   4.34e-05      0.781      0.435   -5.12e-05       0.000
x4             1.0000   4.12e-06   2.43e+05      0.000       1.000       1.000
x5          3.227e-05   3.52e-06      9.163      0.000    2.54e-05    3.92e-05
x6          5.559e-05   6.75e-05      0.823      0.410   -7.67e-05       0.000
x7          3.369e-05   2.38e-05      1.415      0.157    -1.3e-05    8.03e-05
x8             0.0023    2.6e-05     86.661      0.000       0.002       0.002
x9          -8.72e-06   7.51e-07    -11.610      0.000   -1.02e-05   -7.25e-06
x10           -0.0023   3.33e-05    -67.770      0.000      -0.002      -0.002
x11            0.0093    2.8e-05    333.459      0.000       0.009       0.009
x12           -0.0118   2.37e-05   -498.171      0.000      -0.012      -0.012
x13         3.382e-05   1.49e-05      2.273      0.023    4.66e-06     6.3e-05
x14         9.271e-05   6.21e-05      1.493      0.135    -2.9e-05       0.000
x15         3.096e-05   1.92e-05      1.614      0.106   -6.63e-06    6.86e-05
x16          5.52e-05   7.17e-05      0.770      0.441   -8.53e-05       0.000
x17          3.38e-05    3.2e-05      1.056      0.291   -2.89e-05    9.65e-05
x18        -6.715e-06   8.34e-05     -0.081      0.936      -0.000       0.000
x19         3.428e-05   2.07e-05      1.654      0.098   -6.34e-06    7.49e-05
x20        -8.089e-06   9.55e-05     -0.085      0.933      -0.000       0.000
x21         4.255e-05      0.000      0.094      0.925      -0.001       0.001
sigma2      2.581e-10   7.87e-11      3.280      0.001    1.04e-10    4.12e-10
===================================================================================
Ljung-Box (L1) (Q):                 362.92   Jarque-Bera (JB):           5047564.68
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.01   Skew:                           -11.23
Prob(H) (two-sided):                  0.00   Kurtosis:                       390.28
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.75e+20. Standard errors may be unstable.
ARIMA order: (0, 3, 0) 
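As described in the introduction, the hybrid model combines the two parts additively: ARIMA captures the linear structure, and the LSTM is trained on the ARIMA residuals. A minimal sketch of that combination step is below; the array names are hypothetical and the notebook's own variables may differ.

```python
# Hybrid ARIMA-LSTM combination: final forecast = linear ARIMA forecast
# plus the LSTM's forecast of the ARIMA residuals.
arima_forecast = [50.0, 51.0, 52.5]        # hypothetical ARIMA out-of-sample forecast
lstm_residual_forecast = [0.4, -0.2, 0.1]  # hypothetical LSTM residual forecast

hybrid_forecast = [a + r for a, r in zip(arima_forecast, lstm_residual_forecast)]
print(hybrid_forecast)
```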

WARNING:tensorflow:Layer lstm_19 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500

Epoch 00001: val_loss improved from inf to 1.75321, saving model to LSTM5.h5
17/17 - 2s - loss: 0.4551 - val_loss: 1.7532 - lr: 0.0010 - 2s/epoch - 115ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 1.75321 to 0.02672, saving model to LSTM5.h5
17/17 - 0s - loss: 0.1938 - val_loss: 0.0267 - lr: 0.0010 - 156ms/epoch - 9ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.02672
17/17 - 0s - loss: 0.0837 - val_loss: 0.3487 - lr: 0.0010 - 159ms/epoch - 9ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.02672
17/17 - 0s - loss: 0.0603 - val_loss: 0.1463 - lr: 0.0010 - 144ms/epoch - 8ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.02672
17/17 - 0s - loss: 0.0479 - val_loss: 0.0528 - lr: 0.0010 - 143ms/epoch - 8ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.02672
17/17 - 0s - loss: 0.0416 - val_loss: 0.0410 - lr: 0.0010 - 141ms/epoch - 8ms/step
Epoch 7/500

Epoch 00007: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00007: val_loss did not improve from 0.02672
17/17 - 0s - loss: 0.0415 - val_loss: 0.0579 - lr: 0.0010 - 138ms/epoch - 8ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.02672
17/17 - 0s - loss: 0.0412 - val_loss: 0.0504 - lr: 1.0000e-04 - 138ms/epoch - 8ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.02672
17/17 - 0s - loss: 0.0360 - val_loss: 0.0432 - lr: 1.0000e-04 - 141ms/epoch - 8ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.02672
17/17 - 0s - loss: 0.0359 - val_loss: 0.0440 - lr: 1.0000e-04 - 155ms/epoch - 9ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.02672
17/17 - 0s - loss: 0.0345 - val_loss: 0.0409 - lr: 1.0000e-04 - 148ms/epoch - 9ms/step
Epoch 12/500

Epoch 00012: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00012: val_loss did not improve from 0.02672
17/17 - 0s - loss: 0.0343 - val_loss: 0.0383 - lr: 1.0000e-04 - 159ms/epoch - 9ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.02672
17/17 - 0s - loss: 0.0316 - val_loss: 0.0382 - lr: 1.0000e-05 - 145ms/epoch - 9ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.02672
17/17 - 0s - loss: 0.0318 - val_loss: 0.0382 - lr: 1.0000e-05 - 137ms/epoch - 8ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.02672
17/17 - 0s - loss: 0.0312 - val_loss: 0.0379 - lr: 1.0000e-05 - 139ms/epoch - 8ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.02672
17/17 - 0s - loss: 0.0334 - val_loss: 0.0377 - lr: 1.0000e-05 - 139ms/epoch - 8ms/step
Epoch 17/500

Epoch 00017: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00017: val_loss did not improve from 0.02672
17/17 - 0s - loss: 0.0312 - val_loss: 0.0373 - lr: 1.0000e-05 - 151ms/epoch - 9ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.02672
17/17 - 0s - loss: 0.0325 - val_loss: 0.0371 - lr: 1.0000e-05 - 152ms/epoch - 9ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.02672
17/17 - 0s - loss: 0.0324 - val_loss: 0.0372 - lr: 1.0000e-05 - 137ms/epoch - 8ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.02672
17/17 - 0s - loss: 0.0319 - val_loss: 0.0370 - lr: 1.0000e-05 - 134ms/epoch - 8ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.02672
17/17 - 0s - loss: 0.0316 - val_loss: 0.0369 - lr: 1.0000e-05 - 150ms/epoch - 9ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.02672
17/17 - 0s - loss: 0.0336 - val_loss: 0.0364 - lr: 1.0000e-05 - 131ms/epoch - 8ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.02672
17/17 - 0s - loss: 0.0322 - val_loss: 0.0364 - lr: 1.0000e-05 - 137ms/epoch - 8ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.02672
17/17 - 0s - loss: 0.0319 - val_loss: 0.0360 - lr: 1.0000e-05 - 146ms/epoch - 9ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.02672
17/17 - 0s - loss: 0.0311 - val_loss: 0.0357 - lr: 1.0000e-05 - 142ms/epoch - 8ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.02672
17/17 - 0s - loss: 0.0323 - val_loss: 0.0354 - lr: 1.0000e-05 - 143ms/epoch - 8ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.02672
17/17 - 0s - loss: 0.0327 - val_loss: 0.0349 - lr: 1.0000e-05 - 143ms/epoch - 8ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.02672
17/17 - 0s - loss: 0.0338 - val_loss: 0.0342 - lr: 1.0000e-05 - 147ms/epoch - 9ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.02672
17/17 - 0s - loss: 0.0311 - val_loss: 0.0344 - lr: 1.0000e-05 - 148ms/epoch - 9ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.02672
17/17 - 0s - loss: 0.0354 - val_loss: 0.0344 - lr: 1.0000e-05 - 147ms/epoch - 9ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.02672
17/17 - 0s - loss: 0.0364 - val_loss: 0.0342 - lr: 1.0000e-05 - 163ms/epoch - 10ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.02672
17/17 - 0s - loss: 0.0345 - val_loss: 0.0344 - lr: 1.0000e-05 - 167ms/epoch - 10ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.02672
17/17 - 0s - loss: 0.0340 - val_loss: 0.0343 - lr: 1.0000e-05 - 135ms/epoch - 8ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.02672
17/17 - 0s - loss: 0.0299 - val_loss: 0.0344 - lr: 1.0000e-05 - 144ms/epoch - 8ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.02672
17/17 - 0s - loss: 0.0321 - val_loss: 0.0339 - lr: 1.0000e-05 - 141ms/epoch - 8ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.02672
17/17 - 0s - loss: 0.0285 - val_loss: 0.0336 - lr: 1.0000e-05 - 141ms/epoch - 8ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.02672
17/17 - 0s - loss: 0.0362 - val_loss: 0.0335 - lr: 1.0000e-05 - 174ms/epoch - 10ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.02672
17/17 - 0s - loss: 0.0316 - val_loss: 0.0335 - lr: 1.0000e-05 - 156ms/epoch - 9ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.02672
17/17 - 0s - loss: 0.0310 - val_loss: 0.0331 - lr: 1.0000e-05 - 150ms/epoch - 9ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.02672
17/17 - 0s - loss: 0.0299 - val_loss: 0.0324 - lr: 1.0000e-05 - 154ms/epoch - 9ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.02672
17/17 - 0s - loss: 0.0324 - val_loss: 0.0320 - lr: 1.0000e-05 - 153ms/epoch - 9ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.02672
17/17 - 0s - loss: 0.0334 - val_loss: 0.0330 - lr: 1.0000e-05 - 146ms/epoch - 9ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.02672
17/17 - 0s - loss: 0.0317 - val_loss: 0.0336 - lr: 1.0000e-05 - 137ms/epoch - 8ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.02672
17/17 - 0s - loss: 0.0321 - val_loss: 0.0332 - lr: 1.0000e-05 - 144ms/epoch - 8ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.02672
17/17 - 0s - loss: 0.0312 - val_loss: 0.0333 - lr: 1.0000e-05 - 142ms/epoch - 8ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.02672
17/17 - 0s - loss: 0.0297 - val_loss: 0.0339 - lr: 1.0000e-05 - 140ms/epoch - 8ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.02672
17/17 - 0s - loss: 0.0323 - val_loss: 0.0335 - lr: 1.0000e-05 - 146ms/epoch - 9ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.02672
17/17 - 0s - loss: 0.0329 - val_loss: 0.0340 - lr: 1.0000e-05 - 143ms/epoch - 8ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.02672
17/17 - 0s - loss: 0.0332 - val_loss: 0.0345 - lr: 1.0000e-05 - 134ms/epoch - 8ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.02672
17/17 - 0s - loss: 0.0326 - val_loss: 0.0354 - lr: 1.0000e-05 - 148ms/epoch - 9ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.02672
17/17 - 0s - loss: 0.0283 - val_loss: 0.0355 - lr: 1.0000e-05 - 159ms/epoch - 9ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.02672
17/17 - 0s - loss: 0.0327 - val_loss: 0.0353 - lr: 1.0000e-05 - 159ms/epoch - 9ms/step
Epoch 00052: early stopping
SMA
Prediction vs Close:		51.49% Accuracy
Prediction vs Prediction:	48.13% Accuracy
MSE:	 38.984836670221576 
RMSE:	 6.24378384236847 
MAPE:	 5.10393861237253

EMA
Prediction vs Close:		55.22% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 124.25764226707467 
RMSE:	 11.1470912020614 
MAPE:	 9.17724208981177

WMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	43.66% Accuracy
MSE:	 38.21405797999008 
RMSE:	 6.1817520154071275 
MAPE:	 5.0592557753421294
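The summary metrics printed above (directional accuracy, MSE, RMSE, MAPE) can be reproduced with a short helper. This is a sketch, not the notebook's exact code: `pred` and `close` are assumed to be aligned 1-D arrays of predicted and actual closing values, and "Prediction vs Close" is interpreted as the predicted move relative to the previous close while "Prediction vs Prediction" compares consecutive predictions — both interpretations are assumptions.

```python
import numpy as np

def evaluate(pred, close):
    """Directional accuracy plus error metrics for a forecast series.

    `pred` and `close` are assumed to be aligned 1-D arrays of predicted
    and actual closing values (names are illustrative).
    """
    actual_dir = np.sign(np.diff(close))
    # Direction of the predicted move relative to the previous close
    pred_vs_close = np.mean(np.sign(pred[1:] - close[:-1]) == actual_dir) * 100
    # Direction implied by consecutive predictions vs. the actual move
    pred_vs_pred = np.mean(np.sign(np.diff(pred)) == actual_dir) * 100
    mse = np.mean((pred - close) ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs((close - pred) / close)) * 100
    return pred_vs_close, pred_vs_pred, mse, rmse, mape
```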
DEMA
DEMA([input_arrays], [timeperiod=30])

Double Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
89
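The block above is TA-Lib's help text for `DEMA`; the integer that follows is presumably the count of leading NaN rows dropped before modelling (an assumption). As a stand-in for `talib.DEMA`, DEMA can be computed directly from its definition, 2·EMA(p) − EMA(EMA(p)), with pandas:

```python
import numpy as np
import pandas as pd

def dema(price, timeperiod=30):
    """Double Exponential Moving Average: 2*EMA(p) - EMA(EMA(p)).

    A pandas stand-in for talib.DEMA; TA-Lib returns NaN for the
    lookback window of 2*(timeperiod-1) points, emulated here.
    """
    s = pd.Series(price, dtype=float)
    ema1 = s.ewm(span=timeperiod, adjust=False).mean()
    ema2 = ema1.ewm(span=timeperiod, adjust=False).mean()
    out = 2 * ema1 - ema2
    out.iloc[: 2 * (timeperiod - 1)] = np.nan  # TA-Lib-style unstable period
    return out.to_numpy()
```

The later KAMA and MIDPOINT help blocks come from the same TA-Lib abstract interface and follow the same calling convention.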

Working on DEMA predictions
parameters used: 808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-17005.774, Time=2.43 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14574.593, Time=4.13 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-15590.302, Time=7.08 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14572.593, Time=5.54 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-15269.503, Time=8.17 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-16414.961, Time=8.89 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-16878.396, Time=9.86 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-17030.019, Time=4.40 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=-17004.613, Time=3.20 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=-17087.441, Time=6.08 sec
 ARIMA(3,3,2)(0,0,0)[0]             : AIC=inf, Time=15.11 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal
  return np.roots(self.polynomial_reduced_ma)**-1
 ARIMA(2,3,2)(0,0,0)[0]             : AIC=-17003.985, Time=3.06 sec
 ARIMA(3,3,1)(0,0,0)[0] intercept   : AIC=-16998.665, Time=4.11 sec

Best model:  ARIMA(3,3,1)(0,0,0)[0]          
Total fit time: 82.076 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 1)   Log Likelihood                8569.721
Date:                Sun, 12 Dec 2021   AIC                         -17087.441
Time:                        13:40:00   BIC                         -16965.479
Sample:                             0   HQIC                        -17040.603
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -2.799e-10   1.43e-20  -1.96e+10      0.000    -2.8e-10    -2.8e-10
x2         -2.817e-10   1.43e-20  -1.97e+10      0.000   -2.82e-10   -2.82e-10
x3         -2.805e-10   1.43e-20  -1.96e+10      0.000    -2.8e-10    -2.8e-10
x4             1.0000   1.43e-20      7e+19      0.000       1.000       1.000
x5         -2.597e-10   1.37e-20  -1.89e+10      0.000    -2.6e-10    -2.6e-10
x6         -1.388e-09   3.12e-20  -4.45e+10      0.000   -1.39e-09   -1.39e-09
x7         -2.789e-10   1.42e-20  -1.96e+10      0.000   -2.79e-10   -2.79e-10
x8          -2.76e-10   1.42e-20  -1.95e+10      0.000   -2.76e-10   -2.76e-10
x9         -2.216e-12   3.53e-22  -6.28e+09      0.000   -2.22e-12   -2.22e-12
x10        -1.345e-10   9.82e-21  -1.37e+10      0.000   -1.34e-10   -1.34e-10
x11        -2.898e-10   1.45e-20     -2e+10      0.000    -2.9e-10    -2.9e-10
x12        -2.602e-10   1.38e-20  -1.89e+10      0.000    -2.6e-10    -2.6e-10
x13        -2.807e-10   1.43e-20  -1.96e+10      0.000   -2.81e-10   -2.81e-10
x14         -1.87e-09   3.69e-20  -5.07e+10      0.000   -1.87e-09   -1.87e-09
x15        -2.726e-10   1.43e-20   -1.9e+10      0.000   -2.73e-10   -2.73e-10
x16        -7.915e-11   7.68e-21  -1.03e+10      0.000   -7.92e-11   -7.92e-11
x17        -2.606e-10   1.33e-20  -1.96e+10      0.000   -2.61e-10   -2.61e-10
x18        -6.408e-10   2.16e-20  -2.97e+10      0.000   -6.41e-10   -6.41e-10
x19        -2.881e-10   1.46e-20  -1.98e+10      0.000   -2.88e-10   -2.88e-10
x20        -4.337e-10   1.78e-20  -2.44e+10      0.000   -4.34e-10   -4.34e-10
x21        -4.549e-10   1.79e-20  -2.55e+10      0.000   -4.55e-10   -4.55e-10
ar.L1         -0.4923   1.46e-22  -3.38e+21      0.000      -0.492      -0.492
ar.L2         -0.1923   8.47e-23  -2.27e+21      0.000      -0.192      -0.192
ar.L3         -0.0461   4.02e-23  -1.15e+21      0.000      -0.046      -0.046
ma.L1         -0.7078   3.31e-22  -2.14e+21      0.000      -0.708      -0.708
sigma2       8.99e-11   6.95e-11      1.293      0.196   -4.64e-11    2.26e-10
===================================================================================
Ljung-Box (L1) (Q):                  55.07   Jarque-Bera (JB):           4171695.82
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             5.26
Prob(H) (two-sided):                  0.00   Kurtosis:                       355.51
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 2.62e+41. Standard errors may be unstable.
ARIMA order: (3, 3, 1) 

WARNING:tensorflow:Layer lstm_20 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
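The cuDNN fallback warning above fires when an LSTM layer deviates from the requirements for the fused cuDNN kernel: `activation='tanh'`, `recurrent_activation='sigmoid'`, `recurrent_dropout=0`, `unroll=False`, `use_bias=True`, and non-masked inputs. A minimal sketch of a cuDNN-eligible layer (illustrative shapes, not the notebook's model):

```python
import tensorflow as tf

# A cuDNN-eligible LSTM uses the Keras defaults listed above; changing
# any of them (e.g. activation='relu' or recurrent_dropout>0) triggers
# the generic-kernel fallback warning seen in the log.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(5, 1)),   # (timesteps, features) — illustrative
    tf.keras.layers.LSTM(32),              # defaults satisfy the cuDNN criteria
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
```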
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.04424, saving model to LSTM5.h5
10/10 - 2s - loss: 0.2591 - val_loss: 0.0442 - lr: 0.0010 - 2s/epoch - 154ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.04424
10/10 - 0s - loss: 0.0915 - val_loss: 0.1028 - lr: 0.0010 - 88ms/epoch - 9ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.04424 to 0.02063, saving model to LSTM5.h5
10/10 - 0s - loss: 0.0706 - val_loss: 0.0206 - lr: 0.0010 - 110ms/epoch - 11ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.02063
10/10 - 0s - loss: 0.0560 - val_loss: 0.0392 - lr: 0.0010 - 104ms/epoch - 10ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.02063 to 0.01708, saving model to LSTM5.h5
10/10 - 0s - loss: 0.0487 - val_loss: 0.0171 - lr: 0.0010 - 109ms/epoch - 11ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.01708
10/10 - 0s - loss: 0.0401 - val_loss: 0.0385 - lr: 0.0010 - 98ms/epoch - 10ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.01708
10/10 - 0s - loss: 0.0367 - val_loss: 0.0188 - lr: 0.0010 - 114ms/epoch - 11ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.01708
10/10 - 0s - loss: 0.0320 - val_loss: 0.0195 - lr: 0.0010 - 104ms/epoch - 10ms/step
Epoch 9/500

Epoch 00009: val_loss improved from 0.01708 to 0.01701, saving model to LSTM5.h5
10/10 - 0s - loss: 0.0301 - val_loss: 0.0170 - lr: 0.0010 - 117ms/epoch - 12ms/step
Epoch 10/500

Epoch 00010: val_loss improved from 0.01701 to 0.01641, saving model to LSTM5.h5
10/10 - 0s - loss: 0.0302 - val_loss: 0.0164 - lr: 0.0010 - 119ms/epoch - 12ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.01641
10/10 - 0s - loss: 0.0299 - val_loss: 0.0195 - lr: 0.0010 - 92ms/epoch - 9ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.01641
10/10 - 0s - loss: 0.0243 - val_loss: 0.0169 - lr: 0.0010 - 88ms/epoch - 9ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.01641
10/10 - 0s - loss: 0.0264 - val_loss: 0.0206 - lr: 0.0010 - 87ms/epoch - 9ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.01641
10/10 - 0s - loss: 0.0269 - val_loss: 0.0205 - lr: 0.0010 - 89ms/epoch - 9ms/step
Epoch 15/500

Epoch 00015: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00015: val_loss did not improve from 0.01641
10/10 - 0s - loss: 0.0265 - val_loss: 0.0252 - lr: 0.0010 - 91ms/epoch - 9ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.01641
10/10 - 0s - loss: 0.0289 - val_loss: 0.0210 - lr: 1.0000e-04 - 88ms/epoch - 9ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.01641
10/10 - 0s - loss: 0.0235 - val_loss: 0.0189 - lr: 1.0000e-04 - 95ms/epoch - 9ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.01641
10/10 - 0s - loss: 0.0228 - val_loss: 0.0185 - lr: 1.0000e-04 - 101ms/epoch - 10ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.01641
10/10 - 0s - loss: 0.0244 - val_loss: 0.0186 - lr: 1.0000e-04 - 99ms/epoch - 10ms/step
Epoch 20/500

Epoch 00020: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00020: val_loss did not improve from 0.01641
10/10 - 0s - loss: 0.0216 - val_loss: 0.0188 - lr: 1.0000e-04 - 91ms/epoch - 9ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.01641
10/10 - 0s - loss: 0.0237 - val_loss: 0.0189 - lr: 1.0000e-05 - 102ms/epoch - 10ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.01641
10/10 - 0s - loss: 0.0217 - val_loss: 0.0190 - lr: 1.0000e-05 - 95ms/epoch - 10ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.01641
10/10 - 0s - loss: 0.0212 - val_loss: 0.0190 - lr: 1.0000e-05 - 87ms/epoch - 9ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.01641
10/10 - 0s - loss: 0.0228 - val_loss: 0.0190 - lr: 1.0000e-05 - 88ms/epoch - 9ms/step
Epoch 25/500

Epoch 00025: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00025: val_loss did not improve from 0.01641
10/10 - 0s - loss: 0.0227 - val_loss: 0.0190 - lr: 1.0000e-05 - 95ms/epoch - 9ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.01641
10/10 - 0s - loss: 0.0232 - val_loss: 0.0190 - lr: 1.0000e-05 - 88ms/epoch - 9ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.01641
10/10 - 0s - loss: 0.0223 - val_loss: 0.0190 - lr: 1.0000e-05 - 98ms/epoch - 10ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.01641
10/10 - 0s - loss: 0.0212 - val_loss: 0.0190 - lr: 1.0000e-05 - 91ms/epoch - 9ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.01641
10/10 - 0s - loss: 0.0229 - val_loss: 0.0190 - lr: 1.0000e-05 - 102ms/epoch - 10ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.01641
10/10 - 0s - loss: 0.0224 - val_loss: 0.0190 - lr: 1.0000e-05 - 105ms/epoch - 10ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.01641
10/10 - 0s - loss: 0.0225 - val_loss: 0.0190 - lr: 1.0000e-05 - 98ms/epoch - 10ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.01641
10/10 - 0s - loss: 0.0221 - val_loss: 0.0190 - lr: 1.0000e-05 - 94ms/epoch - 9ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.01641
10/10 - 0s - loss: 0.0224 - val_loss: 0.0191 - lr: 1.0000e-05 - 89ms/epoch - 9ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.01641
10/10 - 0s - loss: 0.0243 - val_loss: 0.0191 - lr: 1.0000e-05 - 86ms/epoch - 9ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.01641
10/10 - 0s - loss: 0.0232 - val_loss: 0.0191 - lr: 1.0000e-05 - 91ms/epoch - 9ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.01641
10/10 - 0s - loss: 0.0231 - val_loss: 0.0191 - lr: 1.0000e-05 - 97ms/epoch - 10ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.01641
10/10 - 0s - loss: 0.0226 - val_loss: 0.0191 - lr: 1.0000e-05 - 102ms/epoch - 10ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.01641
10/10 - 0s - loss: 0.0248 - val_loss: 0.0191 - lr: 1.0000e-05 - 106ms/epoch - 11ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.01641
10/10 - 0s - loss: 0.0226 - val_loss: 0.0191 - lr: 1.0000e-05 - 90ms/epoch - 9ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.01641
10/10 - 0s - loss: 0.0253 - val_loss: 0.0191 - lr: 1.0000e-05 - 102ms/epoch - 10ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.01641
10/10 - 0s - loss: 0.0244 - val_loss: 0.0190 - lr: 1.0000e-05 - 103ms/epoch - 10ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.01641
10/10 - 0s - loss: 0.0229 - val_loss: 0.0189 - lr: 1.0000e-05 - 105ms/epoch - 10ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.01641
10/10 - 0s - loss: 0.0237 - val_loss: 0.0189 - lr: 1.0000e-05 - 99ms/epoch - 10ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.01641
10/10 - 0s - loss: 0.0239 - val_loss: 0.0188 - lr: 1.0000e-05 - 95ms/epoch - 9ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.01641
10/10 - 0s - loss: 0.0230 - val_loss: 0.0187 - lr: 1.0000e-05 - 92ms/epoch - 9ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.01641
10/10 - 0s - loss: 0.0217 - val_loss: 0.0187 - lr: 1.0000e-05 - 86ms/epoch - 9ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.01641
10/10 - 0s - loss: 0.0233 - val_loss: 0.0188 - lr: 1.0000e-05 - 93ms/epoch - 9ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.01641
10/10 - 0s - loss: 0.0259 - val_loss: 0.0188 - lr: 1.0000e-05 - 113ms/epoch - 11ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.01641
10/10 - 0s - loss: 0.0232 - val_loss: 0.0188 - lr: 1.0000e-05 - 101ms/epoch - 10ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.01641
10/10 - 0s - loss: 0.0226 - val_loss: 0.0188 - lr: 1.0000e-05 - 111ms/epoch - 11ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.01641
10/10 - 0s - loss: 0.0219 - val_loss: 0.0187 - lr: 1.0000e-05 - 93ms/epoch - 9ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.01641
10/10 - 0s - loss: 0.0226 - val_loss: 0.0187 - lr: 1.0000e-05 - 98ms/epoch - 10ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.01641
10/10 - 0s - loss: 0.0225 - val_loss: 0.0186 - lr: 1.0000e-05 - 95ms/epoch - 9ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.01641
10/10 - 0s - loss: 0.0225 - val_loss: 0.0187 - lr: 1.0000e-05 - 88ms/epoch - 9ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.01641
10/10 - 0s - loss: 0.0265 - val_loss: 0.0187 - lr: 1.0000e-05 - 89ms/epoch - 9ms/step
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.01641
10/10 - 0s - loss: 0.0218 - val_loss: 0.0188 - lr: 1.0000e-05 - 87ms/epoch - 9ms/step
Epoch 57/500

Epoch 00057: val_loss did not improve from 0.01641
10/10 - 0s - loss: 0.0216 - val_loss: 0.0188 - lr: 1.0000e-05 - 95ms/epoch - 9ms/step
Epoch 58/500

Epoch 00058: val_loss did not improve from 0.01641
10/10 - 0s - loss: 0.0230 - val_loss: 0.0188 - lr: 1.0000e-05 - 105ms/epoch - 11ms/step
Epoch 59/500

Epoch 00059: val_loss did not improve from 0.01641
10/10 - 0s - loss: 0.0216 - val_loss: 0.0189 - lr: 1.0000e-05 - 108ms/epoch - 11ms/step
Epoch 60/500

Epoch 00060: val_loss did not improve from 0.01641
10/10 - 0s - loss: 0.0209 - val_loss: 0.0188 - lr: 1.0000e-05 - 94ms/epoch - 9ms/step
Epoch 00060: early stopping
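The callback behaviour visible in this run — checkpointing the best `val_loss` to `LSTM5.h5`, a ReduceLROnPlateau schedule stepping the learning rate down by 10× to a floor of 1e-5, and early stopping well after the last improvement — can be sketched as below. The patience values are inferred from the log and are estimates, not the notebook's exact configuration.

```python
from tensorflow.keras.callbacks import (
    ModelCheckpoint, ReduceLROnPlateau, EarlyStopping)

# Settings inferred from the training log; patience values are estimates.
callbacks = [
    ModelCheckpoint("LSTM5.h5", monitor="val_loss",
                    save_best_only=True, verbose=1),
    ReduceLROnPlateau(monitor="val_loss", factor=0.1,
                      patience=5, min_lr=1e-5, verbose=1),
    EarlyStopping(monitor="val_loss", patience=50, verbose=1),
]
# model.fit(X_train, y_train, epochs=500, validation_data=(X_val, y_val),
#           callbacks=callbacks, verbose=2)
```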
SMA
Prediction vs Close:		51.49% Accuracy
Prediction vs Prediction:	48.13% Accuracy
MSE:	 38.984836670221576 
RMSE:	 6.24378384236847 
MAPE:	 5.10393861237253

EMA
Prediction vs Close:		55.22% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 124.25764226707467 
RMSE:	 11.1470912020614 
MAPE:	 9.17724208981177

WMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	43.66% Accuracy
MSE:	 38.21405797999008 
RMSE:	 6.1817520154071275 
MAPE:	 5.0592557753421294

DEMA
Prediction vs Close:		51.87% Accuracy
Prediction vs Prediction:	42.54% Accuracy
MSE:	 272.80520892035037 
RMSE:	 16.51681594376926 
MAPE:	 15.690440427295842
KAMA
KAMA([input_arrays], [timeperiod=30])

Kaufman Adaptive Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
18

Working on KAMA predictions
parameters used: 808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-17005.902, Time=2.37 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14574.593, Time=3.96 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16796.316, Time=8.22 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14572.593, Time=5.31 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-17004.193, Time=2.48 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-15176.063, Time=11.13 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-16873.638, Time=11.29 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-17008.756, Time=2.48 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=-17004.764, Time=3.22 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=-15723.849, Time=14.02 sec
 ARIMA(2,3,0)(0,0,0)[0] intercept   : AIC=-17006.756, Time=2.68 sec

Best model:  ARIMA(2,3,0)(0,0,0)[0]          
Total fit time: 67.184 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(2, 3, 0)   Log Likelihood                8528.378
Date:                Sun, 12 Dec 2021   AIC                         -17008.756
Time:                        13:49:14   BIC                         -16896.176
Sample:                             0   HQIC                        -16965.520
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1          -2.24e-15   7.41e-26  -3.02e+10      0.000   -2.24e-15   -2.24e-15
x2          8.461e-16    6.6e-26   1.28e+10      0.000    8.46e-16    8.46e-16
x3          4.901e-16   6.89e-26   7.11e+09      0.000     4.9e-16     4.9e-16
x4             1.0000   6.96e-26   1.44e+25      0.000       1.000       1.000
x5          5.931e-15   6.61e-26   8.97e+10      0.000    5.93e-15    5.93e-15
x6          -1.05e-15    1.5e-25     -7e+09      0.000   -1.05e-15   -1.05e-15
x7          1.439e-15   6.87e-26    2.1e+10      0.000    1.44e-15    1.44e-15
x8          -1.25e-15    6.8e-26  -1.84e+10      0.000   -1.25e-15   -1.25e-15
x9         -9.356e-17   8.97e-27  -1.04e+10      0.000   -9.36e-17   -9.36e-17
x10        -1.145e-16   2.88e-26  -3.98e+09      0.000   -1.15e-16   -1.15e-16
x11        -2.036e-16    6.8e-26     -3e+09      0.000   -2.04e-16   -2.04e-16
x12         5.951e-16   6.76e-26   8.81e+09      0.000    5.95e-16    5.95e-16
x13        -6.117e-17   6.94e-26  -8.81e+08      0.000   -6.12e-17   -6.12e-17
x14         1.167e-15   1.99e-25   5.85e+09      0.000    1.17e-15    1.17e-15
x15        -4.274e-14   6.99e-26  -6.11e+11      0.000   -4.27e-14   -4.27e-14
x16         2.262e-14   8.56e-26   2.64e+11      0.000    2.26e-14    2.26e-14
x17         3.384e-14   6.46e-26   5.24e+11      0.000    3.38e-14    3.38e-14
x18         9.894e-16    5.8e-26   1.71e+10      0.000    9.89e-16    9.89e-16
x19         4.115e-14   7.75e-26   5.31e+11      0.000    4.12e-14    4.12e-14
x20        -2.176e-15   9.49e-26  -2.29e+10      0.000   -2.18e-15   -2.18e-15
x21        -7.755e-17   4.63e-26  -1.67e+09      0.000   -7.75e-17   -7.75e-17
ar.L1         -0.9988   9.76e-22  -1.02e+21      0.000      -0.999      -0.999
ar.L2         -0.4972   4.07e-23  -1.22e+22      0.000      -0.497      -0.497
sigma2          1e-10   6.99e-11      1.432      0.152   -3.69e-11    2.37e-10
===================================================================================
Ljung-Box (L1) (Q):                  31.54   Jarque-Bera (JB):           2432532.03
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                            -0.15
Prob(H) (two-sided):                  0.00   Kurtosis:                       272.30
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 7.19e+48. Standard errors may be unstable.
ARIMA order: (2, 3, 0) 

WARNING:tensorflow:Layer lstm_21 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.01947, saving model to LSTM5.h5
45/45 - 2s - loss: 0.5053 - val_loss: 0.0195 - lr: 0.0010 - 2s/epoch - 39ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.01947
45/45 - 0s - loss: 0.1397 - val_loss: 1.1759 - lr: 0.0010 - 359ms/epoch - 8ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.01947
45/45 - 0s - loss: 0.0768 - val_loss: 0.0723 - lr: 0.0010 - 347ms/epoch - 8ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.01947
45/45 - 0s - loss: 0.0590 - val_loss: 0.2221 - lr: 0.0010 - 330ms/epoch - 7ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.01947
45/45 - 0s - loss: 0.0556 - val_loss: 0.0620 - lr: 0.0010 - 327ms/epoch - 7ms/step
Epoch 6/500

Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00006: val_loss did not improve from 0.01947
45/45 - 0s - loss: 0.0542 - val_loss: 0.0496 - lr: 0.0010 - 364ms/epoch - 8ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.01947
45/45 - 0s - loss: 0.0763 - val_loss: 0.0557 - lr: 1.0000e-04 - 335ms/epoch - 7ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.01947
45/45 - 0s - loss: 0.0469 - val_loss: 0.0550 - lr: 1.0000e-04 - 359ms/epoch - 8ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.01947
45/45 - 0s - loss: 0.0404 - val_loss: 0.0536 - lr: 1.0000e-04 - 355ms/epoch - 8ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.01947
45/45 - 0s - loss: 0.0466 - val_loss: 0.0530 - lr: 1.0000e-04 - 328ms/epoch - 7ms/step
Epoch 11/500

Epoch 00011: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00011: val_loss did not improve from 0.01947
45/45 - 0s - loss: 0.0428 - val_loss: 0.0477 - lr: 1.0000e-04 - 324ms/epoch - 7ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.01947
45/45 - 0s - loss: 0.0476 - val_loss: 0.0474 - lr: 1.0000e-05 - 346ms/epoch - 8ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.01947
45/45 - 0s - loss: 0.0398 - val_loss: 0.0472 - lr: 1.0000e-05 - 319ms/epoch - 7ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.01947
45/45 - 0s - loss: 0.0426 - val_loss: 0.0473 - lr: 1.0000e-05 - 333ms/epoch - 7ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.01947
45/45 - 0s - loss: 0.0424 - val_loss: 0.0467 - lr: 1.0000e-05 - 349ms/epoch - 8ms/step
Epoch 16/500

Epoch 00016: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00016: val_loss did not improve from 0.01947
45/45 - 0s - loss: 0.0411 - val_loss: 0.0461 - lr: 1.0000e-05 - 336ms/epoch - 7ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.01947
45/45 - 0s - loss: 0.0408 - val_loss: 0.0465 - lr: 1.0000e-05 - 351ms/epoch - 8ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.01947
45/45 - 0s - loss: 0.0400 - val_loss: 0.0456 - lr: 1.0000e-05 - 373ms/epoch - 8ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.01947
45/45 - 0s - loss: 0.0446 - val_loss: 0.0448 - lr: 1.0000e-05 - 338ms/epoch - 8ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.01947
45/45 - 0s - loss: 0.0456 - val_loss: 0.0443 - lr: 1.0000e-05 - 334ms/epoch - 7ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.01947
45/45 - 0s - loss: 0.0409 - val_loss: 0.0434 - lr: 1.0000e-05 - 343ms/epoch - 8ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.01947
45/45 - 0s - loss: 0.0415 - val_loss: 0.0430 - lr: 1.0000e-05 - 330ms/epoch - 7ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.01947
45/45 - 0s - loss: 0.0415 - val_loss: 0.0430 - lr: 1.0000e-05 - 363ms/epoch - 8ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.01947
45/45 - 0s - loss: 0.0387 - val_loss: 0.0432 - lr: 1.0000e-05 - 368ms/epoch - 8ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.01947
45/45 - 0s - loss: 0.0437 - val_loss: 0.0431 - lr: 1.0000e-05 - 330ms/epoch - 7ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.01947
45/45 - 0s - loss: 0.0393 - val_loss: 0.0427 - lr: 1.0000e-05 - 344ms/epoch - 8ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.01947
45/45 - 0s - loss: 0.0434 - val_loss: 0.0421 - lr: 1.0000e-05 - 332ms/epoch - 7ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.01947
45/45 - 0s - loss: 0.0385 - val_loss: 0.0418 - lr: 1.0000e-05 - 338ms/epoch - 8ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.01947
45/45 - 0s - loss: 0.0362 - val_loss: 0.0421 - lr: 1.0000e-05 - 357ms/epoch - 8ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.01947
45/45 - 0s - loss: 0.0372 - val_loss: 0.0413 - lr: 1.0000e-05 - 339ms/epoch - 8ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.01947
45/45 - 0s - loss: 0.0385 - val_loss: 0.0401 - lr: 1.0000e-05 - 343ms/epoch - 8ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.01947
45/45 - 0s - loss: 0.0411 - val_loss: 0.0400 - lr: 1.0000e-05 - 349ms/epoch - 8ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.01947
45/45 - 0s - loss: 0.0386 - val_loss: 0.0397 - lr: 1.0000e-05 - 353ms/epoch - 8ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.01947
45/45 - 0s - loss: 0.0465 - val_loss: 0.0392 - lr: 1.0000e-05 - 327ms/epoch - 7ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.01947
45/45 - 0s - loss: 0.0367 - val_loss: 0.0391 - lr: 1.0000e-05 - 344ms/epoch - 8ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.01947
45/45 - 0s - loss: 0.0406 - val_loss: 0.0396 - lr: 1.0000e-05 - 332ms/epoch - 7ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.01947
45/45 - 0s - loss: 0.0380 - val_loss: 0.0391 - lr: 1.0000e-05 - 349ms/epoch - 8ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.01947
45/45 - 0s - loss: 0.0386 - val_loss: 0.0385 - lr: 1.0000e-05 - 383ms/epoch - 9ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.01947
45/45 - 0s - loss: 0.0417 - val_loss: 0.0399 - lr: 1.0000e-05 - 335ms/epoch - 7ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.01947
45/45 - 0s - loss: 0.0410 - val_loss: 0.0407 - lr: 1.0000e-05 - 342ms/epoch - 8ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.01947
45/45 - 0s - loss: 0.0394 - val_loss: 0.0403 - lr: 1.0000e-05 - 336ms/epoch - 7ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.01947
45/45 - 0s - loss: 0.0391 - val_loss: 0.0385 - lr: 1.0000e-05 - 333ms/epoch - 7ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.01947
45/45 - 0s - loss: 0.0409 - val_loss: 0.0370 - lr: 1.0000e-05 - 345ms/epoch - 8ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.01947
45/45 - 0s - loss: 0.0416 - val_loss: 0.0363 - lr: 1.0000e-05 - 352ms/epoch - 8ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.01947
45/45 - 0s - loss: 0.0378 - val_loss: 0.0346 - lr: 1.0000e-05 - 329ms/epoch - 7ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.01947
45/45 - 0s - loss: 0.0374 - val_loss: 0.0336 - lr: 1.0000e-05 - 375ms/epoch - 8ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.01947
45/45 - 0s - loss: 0.0405 - val_loss: 0.0331 - lr: 1.0000e-05 - 356ms/epoch - 8ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.01947
45/45 - 0s - loss: 0.0399 - val_loss: 0.0332 - lr: 1.0000e-05 - 337ms/epoch - 7ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.01947
45/45 - 0s - loss: 0.0387 - val_loss: 0.0318 - lr: 1.0000e-05 - 358ms/epoch - 8ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.01947
45/45 - 0s - loss: 0.0379 - val_loss: 0.0309 - lr: 1.0000e-05 - 353ms/epoch - 8ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.01947
45/45 - 0s - loss: 0.0414 - val_loss: 0.0309 - lr: 1.0000e-05 - 340ms/epoch - 8ms/step
Epoch 00051: early stopping
SMA
Prediction vs Close:		51.49% Accuracy
Prediction vs Prediction:	48.13% Accuracy
MSE:	 38.984836670221576 
RMSE:	 6.24378384236847 
MAPE:	 5.10393861237253

EMA
Prediction vs Close:		55.22% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 124.25764226707467 
RMSE:	 11.1470912020614 
MAPE:	 9.17724208981177

WMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	43.66% Accuracy
MSE:	 38.21405797999008 
RMSE:	 6.1817520154071275 
MAPE:	 5.0592557753421294

DEMA
Prediction vs Close:		51.87% Accuracy
Prediction vs Prediction:	42.54% Accuracy
MSE:	 272.80520892035037 
RMSE:	 16.51681594376926 
MAPE:	 15.690440427295842

KAMA
Prediction vs Close:		55.6% Accuracy
Prediction vs Prediction:	45.9% Accuracy
MSE:	 39.855043502438484 
RMSE:	 6.313085101789654 
MAPE:	 4.932118299391016
MIDPOINT
MIDPOINT([input_arrays], [timeperiod=14])

MidPoint over period (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 14
Outputs:
    real
14

Working on MIDPOINT predictions
parameters used: 808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-17005.753, Time=2.42 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14574.592, Time=4.12 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16288.639, Time=11.15 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14572.592, Time=5.20 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-15275.254, Time=7.02 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-15486.751, Time=12.72 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=48.000, Time=0.47 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-17008.491, Time=2.48 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=-17004.554, Time=3.01 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=-17087.445, Time=6.05 sec
 ARIMA(3,3,2)(0,0,0)[0]             : AIC=-15686.421, Time=9.57 sec
 ARIMA(2,3,2)(0,0,0)[0]             : AIC=-17030.168, Time=15.40 sec
 ARIMA(3,3,1)(0,0,0)[0] intercept   : AIC=-15138.715, Time=14.83 sec

Best model:  ARIMA(3,3,1)(0,0,0)[0]          
Total fit time: 94.480 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 1)   Log Likelihood                8569.722
Date:                Sun, 12 Dec 2021   AIC                         -17087.445
Time:                        13:51:59   BIC                         -16965.483
Sample:                             0   HQIC                        -17040.607
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1          -2.14e-10   1.09e-20  -1.96e+10      0.000   -2.14e-10   -2.14e-10
x2         -2.126e-10   1.13e-20  -1.88e+10      0.000   -2.13e-10   -2.13e-10
x3         -2.175e-10   1.06e-20  -2.04e+10      0.000   -2.17e-10   -2.17e-10
x4             1.0000    1.1e-20   9.11e+19      0.000       1.000       1.000
x5         -1.941e-10   1.05e-20  -1.86e+10      0.000   -1.94e-10   -1.94e-10
x6         -4.131e-09   7.64e-20   -5.4e+10      0.000   -4.13e-09   -4.13e-09
x7         -1.965e-10   1.05e-20  -1.86e+10      0.000   -1.96e-10   -1.96e-10
x8         -1.961e-10   1.07e-20  -1.84e+10      0.000   -1.96e-10   -1.96e-10
x9         -1.005e-10   9.12e-22   -1.1e+11      0.000      -1e-10      -1e-10
x10        -1.739e-10   3.37e-21  -5.16e+10      0.000   -1.74e-10   -1.74e-10
x11        -1.941e-10   1.07e-20  -1.82e+10      0.000   -1.94e-10   -1.94e-10
x12        -2.005e-10   1.06e-20  -1.89e+10      0.000      -2e-10      -2e-10
x13        -2.056e-10   1.07e-20  -1.91e+10      0.000   -2.06e-10   -2.06e-10
x14        -1.687e-09   3.15e-20  -5.36e+10      0.000   -1.69e-09   -1.69e-09
x15        -2.365e-10   1.17e-20  -2.01e+10      0.000   -2.36e-10   -2.36e-10
x16        -1.523e-10   9.42e-21  -1.62e+10      0.000   -1.52e-10   -1.52e-10
x17        -1.491e-10   9.33e-21   -1.6e+10      0.000   -1.49e-10   -1.49e-10
x18        -6.404e-10   1.93e-20  -3.32e+10      0.000    -6.4e-10    -6.4e-10
x19        -2.596e-10   1.23e-20  -2.11e+10      0.000    -2.6e-10    -2.6e-10
x20        -6.246e-10   1.91e-20  -3.28e+10      0.000   -6.25e-10   -6.25e-10
x21        -1.953e-09   2.16e-20  -9.04e+10      0.000   -1.95e-09   -1.95e-09
ar.L1         -0.4914   1.46e-22  -3.38e+21      0.000      -0.491      -0.491
ar.L2         -0.1934   8.48e-23  -2.28e+21      0.000      -0.193      -0.193
ar.L3         -0.0491    4.2e-23  -1.17e+21      0.000      -0.049      -0.049
ma.L1         -0.7092   3.33e-22  -2.13e+21      0.000      -0.709      -0.709
sigma2       8.99e-11   6.95e-11      1.293      0.196   -4.64e-11    2.26e-10
===================================================================================
Ljung-Box (L1) (Q):                  32.51   Jarque-Bera (JB):             49038.38
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.35   Skew:                             1.06
Prob(H) (two-sided):                  0.00   Kurtosis:                        41.18
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.71e+40. Standard errors may be unstable.
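The headline numbers in the table can be cross-checked by hand. With 26 estimated parameters (21 exogenous terms, 3 AR, 1 MA, sigma2) and the reported log-likelihood, AIC = 2k − 2·logL reproduces the table; BIC = k·ln(n) − 2·logL matches if n is taken as the effective sample size after third-order differencing (808 − 3 = 805 — an assumption about statsmodels' counting, not something stated in the output). Jarque-Bera follows from the reported residual skew and kurtosis:

```python
import math

log_lik, k, n_eff = 8569.722, 26, 805   # values read off the SARIMAX table above

aic = 2 * k - 2 * log_lik               # ≈ -17087.444 (table: -17087.445)
bic = k * math.log(n_eff) - 2 * log_lik # ≈ -16965.482 (table: -16965.483)
print(round(aic, 3), round(bic, 3))

skew, kurt = 1.06, 41.18                # reported residual skew / kurtosis
jb = n_eff / 6 * (skew**2 + (kurt - 3) ** 2 / 4)
print(round(jb, 2))                     # ≈ 4.9e4; table reports 49038.38 (S, K are rounded)
```

The extreme kurtosis (41.18 vs. 3 for a normal distribution) is what drives the huge JB statistic — the residuals are far from mesokurtic, which ties back to the volatility-balance point in the introduction.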
ARIMA order: (3, 3, 1) 

WARNING:tensorflow:Layer lstm_22 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.58849, saving model to LSTM5.h5
58/58 - 2s - loss: 0.2678 - val_loss: 0.5885 - lr: 0.0010 - 2s/epoch - 32ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.58849
58/58 - 0s - loss: 0.2903 - val_loss: 2.1962 - lr: 0.0010 - 431ms/epoch - 7ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.58849
58/58 - 0s - loss: 0.0638 - val_loss: 0.7741 - lr: 0.0010 - 409ms/epoch - 7ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.58849 to 0.30876, saving model to LSTM5.h5
58/58 - 0s - loss: 0.0665 - val_loss: 0.3088 - lr: 0.0010 - 444ms/epoch - 8ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.30876 to 0.13512, saving model to LSTM5.h5
58/58 - 0s - loss: 0.0457 - val_loss: 0.1351 - lr: 0.0010 - 432ms/epoch - 7ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.13512 to 0.06248, saving model to LSTM5.h5
58/58 - 0s - loss: 0.0390 - val_loss: 0.0625 - lr: 0.0010 - 417ms/epoch - 7ms/step
Epoch 7/500

Epoch 00007: val_loss improved from 0.06248 to 0.03894, saving model to LSTM5.h5
58/58 - 0s - loss: 0.0322 - val_loss: 0.0389 - lr: 0.0010 - 432ms/epoch - 7ms/step
Epoch 8/500

Epoch 00008: val_loss improved from 0.03894 to 0.03035, saving model to LSTM5.h5
58/58 - 0s - loss: 0.0329 - val_loss: 0.0303 - lr: 0.0010 - 429ms/epoch - 7ms/step
Epoch 9/500

Epoch 00009: val_loss improved from 0.03035 to 0.01048, saving model to LSTM5.h5
58/58 - 0s - loss: 0.0310 - val_loss: 0.0105 - lr: 0.0010 - 432ms/epoch - 7ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.01048
58/58 - 0s - loss: 0.0333 - val_loss: 0.0124 - lr: 0.0010 - 437ms/epoch - 8ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.01048
58/58 - 0s - loss: 0.0325 - val_loss: 0.0488 - lr: 0.0010 - 427ms/epoch - 7ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.01048
58/58 - 0s - loss: 0.0389 - val_loss: 0.0348 - lr: 0.0010 - 439ms/epoch - 8ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.01048
58/58 - 0s - loss: 0.0292 - val_loss: 0.0212 - lr: 0.0010 - 407ms/epoch - 7ms/step
Epoch 14/500

Epoch 00014: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00014: val_loss did not improve from 0.01048
58/58 - 0s - loss: 0.0331 - val_loss: 0.0954 - lr: 0.0010 - 430ms/epoch - 7ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.01048
58/58 - 0s - loss: 0.0299 - val_loss: 0.0724 - lr: 1.0000e-04 - 400ms/epoch - 7ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.01048
58/58 - 0s - loss: 0.0231 - val_loss: 0.0653 - lr: 1.0000e-04 - 394ms/epoch - 7ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.01048
58/58 - 0s - loss: 0.0251 - val_loss: 0.0666 - lr: 1.0000e-04 - 431ms/epoch - 7ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.01048
58/58 - 0s - loss: 0.0246 - val_loss: 0.0709 - lr: 1.0000e-04 - 429ms/epoch - 7ms/step
Epoch 19/500

Epoch 00019: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00019: val_loss did not improve from 0.01048
58/58 - 0s - loss: 0.0267 - val_loss: 0.0765 - lr: 1.0000e-04 - 426ms/epoch - 7ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.01048
58/58 - 0s - loss: 0.0249 - val_loss: 0.0769 - lr: 1.0000e-05 - 412ms/epoch - 7ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.01048
58/58 - 0s - loss: 0.0241 - val_loss: 0.0792 - lr: 1.0000e-05 - 450ms/epoch - 8ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.01048
58/58 - 0s - loss: 0.0232 - val_loss: 0.0794 - lr: 1.0000e-05 - 420ms/epoch - 7ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.01048
58/58 - 0s - loss: 0.0243 - val_loss: 0.0797 - lr: 1.0000e-05 - 400ms/epoch - 7ms/step
Epoch 24/500

Epoch 00024: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00024: val_loss did not improve from 0.01048
58/58 - 0s - loss: 0.0244 - val_loss: 0.0789 - lr: 1.0000e-05 - 418ms/epoch - 7ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.01048
58/58 - 0s - loss: 0.0249 - val_loss: 0.0781 - lr: 1.0000e-05 - 425ms/epoch - 7ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.01048
58/58 - 0s - loss: 0.0234 - val_loss: 0.0779 - lr: 1.0000e-05 - 419ms/epoch - 7ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.01048
58/58 - 0s - loss: 0.0221 - val_loss: 0.0786 - lr: 1.0000e-05 - 394ms/epoch - 7ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.01048
58/58 - 0s - loss: 0.0252 - val_loss: 0.0784 - lr: 1.0000e-05 - 442ms/epoch - 8ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.01048
58/58 - 0s - loss: 0.0226 - val_loss: 0.0797 - lr: 1.0000e-05 - 386ms/epoch - 7ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.01048
58/58 - 0s - loss: 0.0226 - val_loss: 0.0810 - lr: 1.0000e-05 - 440ms/epoch - 8ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.01048
58/58 - 0s - loss: 0.0231 - val_loss: 0.0806 - lr: 1.0000e-05 - 421ms/epoch - 7ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.01048
58/58 - 0s - loss: 0.0259 - val_loss: 0.0785 - lr: 1.0000e-05 - 433ms/epoch - 7ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.01048
58/58 - 0s - loss: 0.0246 - val_loss: 0.0774 - lr: 1.0000e-05 - 425ms/epoch - 7ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.01048
58/58 - 0s - loss: 0.0238 - val_loss: 0.0785 - lr: 1.0000e-05 - 413ms/epoch - 7ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.01048
58/58 - 0s - loss: 0.0205 - val_loss: 0.0778 - lr: 1.0000e-05 - 443ms/epoch - 8ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.01048
58/58 - 0s - loss: 0.0248 - val_loss: 0.0806 - lr: 1.0000e-05 - 409ms/epoch - 7ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.01048
58/58 - 0s - loss: 0.0212 - val_loss: 0.0810 - lr: 1.0000e-05 - 403ms/epoch - 7ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.01048
58/58 - 0s - loss: 0.0225 - val_loss: 0.0823 - lr: 1.0000e-05 - 402ms/epoch - 7ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.01048
58/58 - 0s - loss: 0.0240 - val_loss: 0.0832 - lr: 1.0000e-05 - 418ms/epoch - 7ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.01048
58/58 - 0s - loss: 0.0246 - val_loss: 0.0826 - lr: 1.0000e-05 - 412ms/epoch - 7ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.01048
58/58 - 0s - loss: 0.0214 - val_loss: 0.0823 - lr: 1.0000e-05 - 417ms/epoch - 7ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.01048
58/58 - 0s - loss: 0.0248 - val_loss: 0.0821 - lr: 1.0000e-05 - 401ms/epoch - 7ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.01048
58/58 - 0s - loss: 0.0231 - val_loss: 0.0841 - lr: 1.0000e-05 - 398ms/epoch - 7ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.01048
58/58 - 0s - loss: 0.0239 - val_loss: 0.0841 - lr: 1.0000e-05 - 406ms/epoch - 7ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.01048
58/58 - 0s - loss: 0.0232 - val_loss: 0.0856 - lr: 1.0000e-05 - 449ms/epoch - 8ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.01048
58/58 - 0s - loss: 0.0259 - val_loss: 0.0860 - lr: 1.0000e-05 - 434ms/epoch - 7ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.01048
58/58 - 0s - loss: 0.0229 - val_loss: 0.0873 - lr: 1.0000e-05 - 425ms/epoch - 7ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.01048
58/58 - 0s - loss: 0.0234 - val_loss: 0.0898 - lr: 1.0000e-05 - 402ms/epoch - 7ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.01048
58/58 - 0s - loss: 0.0221 - val_loss: 0.0903 - lr: 1.0000e-05 - 442ms/epoch - 8ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.01048
58/58 - 0s - loss: 0.0222 - val_loss: 0.0909 - lr: 1.0000e-05 - 444ms/epoch - 8ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.01048
58/58 - 0s - loss: 0.0220 - val_loss: 0.0907 - lr: 1.0000e-05 - 418ms/epoch - 7ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.01048
58/58 - 0s - loss: 0.0205 - val_loss: 0.0895 - lr: 1.0000e-05 - 423ms/epoch - 7ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.01048
58/58 - 0s - loss: 0.0212 - val_loss: 0.0882 - lr: 1.0000e-05 - 409ms/epoch - 7ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.01048
58/58 - 0s - loss: 0.0238 - val_loss: 0.0899 - lr: 1.0000e-05 - 411ms/epoch - 7ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.01048
58/58 - 0s - loss: 0.0205 - val_loss: 0.0885 - lr: 1.0000e-05 - 415ms/epoch - 7ms/step
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.01048
58/58 - 0s - loss: 0.0222 - val_loss: 0.0884 - lr: 1.0000e-05 - 407ms/epoch - 7ms/step
Epoch 57/500

Epoch 00057: val_loss did not improve from 0.01048
58/58 - 0s - loss: 0.0239 - val_loss: 0.0912 - lr: 1.0000e-05 - 410ms/epoch - 7ms/step
Epoch 58/500

Epoch 00058: val_loss did not improve from 0.01048
58/58 - 0s - loss: 0.0228 - val_loss: 0.0936 - lr: 1.0000e-05 - 429ms/epoch - 7ms/step
Epoch 59/500

Epoch 00059: val_loss did not improve from 0.01048
58/58 - 0s - loss: 0.0235 - val_loss: 0.0936 - lr: 1.0000e-05 - 449ms/epoch - 8ms/step
Epoch 00059: early stopping
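The training trace above is the signature of three standard Keras callbacks: ModelCheckpoint (saves LSTM5.h5 on each val_loss improvement), ReduceLROnPlateau (cuts the learning rate 10× after a few stagnant epochs, with a floor of 1e-5), and EarlyStopping (halts after a long stretch without improvement). The sketch below re-implements that bookkeeping without TensorFlow; the patience values (5 epochs to cut the LR, 50 to stop) are assumptions inferred from the log, not read from the notebook's source:

```python
class PlateauMonitor:
    """Minimal sketch of ReduceLROnPlateau + EarlyStopping bookkeeping."""

    def __init__(self, lr=1e-3, min_lr=1e-5, lr_patience=5, stop_patience=50):
        self.lr, self.min_lr = lr, min_lr
        self.lr_patience, self.stop_patience = lr_patience, stop_patience
        self.best = float("inf")
        self.lr_wait = self.stop_wait = 0
        self.stopped_epoch = None

    def on_epoch_end(self, epoch, val_loss):
        if val_loss < self.best:                 # improvement: reset both counters
            self.best = val_loss
            self.lr_wait = self.stop_wait = 0
            return
        self.lr_wait += 1
        self.stop_wait += 1
        if self.lr_wait >= self.lr_patience:     # plateau: cut LR 10x, floor at min_lr
            self.lr = max(self.lr * 0.1, self.min_lr)
            self.lr_wait = 0
        if self.stop_wait >= self.stop_patience: # long plateau: stop training
            self.stopped_epoch = epoch

# Replay a loss curve shaped like the run above: best at epoch 9, then flat.
mon = PlateauMonitor()
losses = [0.59, 2.2, 0.77, 0.31, 0.135, 0.062, 0.039, 0.030, 0.0105] + [0.08] * 50
for epoch, vl in enumerate(losses, start=1):
    mon.on_epoch_end(epoch, vl)
    if mon.stopped_epoch:
        break
print(mon.stopped_epoch, mon.lr)   # stops at epoch 59 with lr at the 1e-5 floor
```

This reproduces the log's shape: LR reductions at epochs 14 and 19, the floor reached by epoch 24, and early stopping at epoch 59, 50 epochs after the best val_loss at epoch 9.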
SMA
Prediction vs Close:		51.49% Accuracy
Prediction vs Prediction:	48.13% Accuracy
MSE:	 38.984836670221576 
RMSE:	 6.24378384236847 
MAPE:	 5.10393861237253

EMA
Prediction vs Close:		55.22% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 124.25764226707467 
RMSE:	 11.1470912020614 
MAPE:	 9.17724208981177

WMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	43.66% Accuracy
MSE:	 38.21405797999008 
RMSE:	 6.1817520154071275 
MAPE:	 5.0592557753421294

DEMA
Prediction vs Close:		51.87% Accuracy
Prediction vs Prediction:	42.54% Accuracy
MSE:	 272.80520892035037 
RMSE:	 16.51681594376926 
MAPE:	 15.690440427295842

KAMA
Prediction vs Close:		55.6% Accuracy
Prediction vs Prediction:	45.9% Accuracy
MSE:	 39.855043502438484 
RMSE:	 6.313085101789654 
MAPE:	 4.932118299391016

MIDPOINT
Prediction vs Close:		51.87% Accuracy
Prediction vs Prediction:	47.39% Accuracy
MSE:	 247.8059617737088 
RMSE:	 15.741853822650901 
MAPE:	 13.137429929578502
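Each summary block reports two directional accuracies plus MSE, RMSE, and MAPE. The error metrics can be reproduced with numpy; the exact definitions behind "Prediction vs Close" and "Prediction vs Prediction" are not shown in this output, so the sign-of-move comparison below is an assumed stand-in:

```python
import numpy as np

def report(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    err = y_true - y_pred
    mse = float(np.mean(err ** 2))
    return {
        "MSE": mse,
        "RMSE": float(np.sqrt(mse)),
        "MAPE": float(np.mean(np.abs(err / y_true)) * 100),
        # Assumed definition: % of steps where the predicted move
        # has the same sign as the actual move.
        "DirAcc": float(np.mean(np.sign(np.diff(y_pred)) == np.sign(np.diff(y_true))) * 100),
    }

y_true = np.array([100.0, 102.0, 101.0, 105.0])
y_pred = np.array([ 99.0, 103.0, 102.0, 104.0])
print(report(y_true, y_pred))
```

Note that MSE/RMSE are in price units (so they are only comparable across indicators on the same series), while MAPE and the accuracies are scale-free.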

T3
T3([input_arrays], [timeperiod=5], [vfactor=0.7])

Triple Exponential Moving Average (T3) (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 5
    vfactor: 0.7
Outputs:
    real
19
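T3 is Tillson's triple-smoothed moving average: a "generalized DEMA", GD(x) = EMA(x)·(1 + v) − EMA(EMA(x))·v, applied three times with volume factor v. A pandas sketch of that recursion (the EMA initialization differs from TA-Lib's, so the warm-up values will not match TA-Lib exactly):

```python
import pandas as pd

def t3(price: pd.Series, timeperiod: int = 5, vfactor: float = 0.7) -> pd.Series:
    """Tillson T3: generalized DEMA applied three times."""
    def ema(s: pd.Series) -> pd.Series:
        return s.ewm(span=timeperiod, adjust=False).mean()

    def gd(s: pd.Series) -> pd.Series:
        return ema(s) * (1 + vfactor) - ema(ema(s)) * vfactor

    return gd(gd(gd(price)))

s = pd.Series([10.0, 11.0, 12.0, 13.0, 14.0, 15.0, 16.0, 17.0])
print(t3(s).round(3).tolist())
```

The triple application is why the lookback printed above (19) is much longer than the nominal timeperiod of 5: each GD pass compounds the smoothing window.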

Working on T3 predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-16569.270, Time=2.41 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14511.291, Time=2.44 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-15408.738, Time=7.86 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-15165.005, Time=7.88 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-15595.465, Time=7.24 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-15837.470, Time=9.75 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-15491.538, Time=9.88 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-16378.438, Time=2.47 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal
  return np.roots(self.polynomial_reduced_ma)**-1
 ARIMA(2,3,2)(0,0,0)[0]             : AIC=-16318.604, Time=3.35 sec
 ARIMA(1,3,1)(0,0,0)[0] intercept   : AIC=-16567.270, Time=2.21 sec

Best model:  ARIMA(1,3,1)(0,0,0)[0]          
Total fit time: 55.514 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(1, 3, 1)   Log Likelihood                8308.635
Date:                Sun, 12 Dec 2021   AIC                         -16569.270
Time:                        14:00:45   BIC                         -16456.690
Sample:                             0   HQIC                        -16526.035
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1          1.355e-13   3.43e-05   3.95e-09      1.000   -6.72e-05    6.72e-05
x2          5.009e-14   2.67e-05   1.88e-09      1.000   -5.23e-05    5.23e-05
x3         -9.101e-15   2.19e-05  -4.16e-10      1.000   -4.29e-05    4.29e-05
x4             1.0000   2.97e-05   3.37e+04      0.000       1.000       1.000
x5          3.626e-12    3.2e-05   1.13e-07      1.000   -6.28e-05    6.28e-05
x6          6.879e-17      0.000   5.13e-13      1.000      -0.000       0.000
x7          1.588e-13   4.04e-05   3.93e-09      1.000   -7.92e-05    7.92e-05
x8            -0.0002   9.77e-06    -20.395      0.000      -0.000      -0.000
x9          3.877e-14      0.001   6.24e-11      1.000      -0.001       0.001
x10         -7.41e-05      0.001     -0.129      0.897      -0.001       0.001
x11            0.0003   4.91e-05      6.926      0.000       0.000       0.000
x12           -0.0004   7.27e-05     -5.556      0.000      -0.001      -0.000
x13        -2.679e-14   3.39e-05   -7.9e-10      1.000   -6.65e-05    6.65e-05
x14          2.97e-13      0.000   2.31e-09      1.000      -0.000       0.000
x15         1.602e-12   7.47e-05   2.14e-08      1.000      -0.000       0.000
x16        -8.756e-13   4.29e-05  -2.04e-08      1.000   -8.41e-05    8.41e-05
x17         1.793e-12   6.56e-05   2.74e-08      1.000      -0.000       0.000
x18        -1.019e-13      0.000  -5.54e-10      1.000      -0.000       0.000
x19        -1.077e-12   8.29e-05   -1.3e-08      1.000      -0.000       0.000
x20         1.771e-13   8.45e-05    2.1e-09      1.000      -0.000       0.000
x21         9.233e-16      0.000   1.94e-12      1.000      -0.001       0.001
ar.L1         -0.2857      0.000  -2747.572      0.000      -0.286      -0.285
ma.L1         -0.9142   7.12e-06  -1.28e+05      0.000      -0.914      -0.914
sigma2          1e-10      7e-11      1.429      0.153   -3.71e-11    2.37e-10
===================================================================================
Ljung-Box (L1) (Q):                  84.32   Jarque-Bera (JB):           4804295.53
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             5.87
Prob(H) (two-sided):                  0.00   Kurtosis:                       381.28
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 2.06e+20. Standard errors may be unstable.
ARIMA order: (1, 3, 1) 

WARNING:tensorflow:Layer lstm_23 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.16122, saving model to LSTM5.h5
43/43 - 2s - loss: 0.2956 - val_loss: 0.1612 - lr: 0.0010 - 2s/epoch - 42ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.16122 to 0.02401, saving model to LSTM5.h5
43/43 - 0s - loss: 0.0668 - val_loss: 0.0240 - lr: 0.0010 - 333ms/epoch - 8ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.02401
43/43 - 0s - loss: 0.0778 - val_loss: 0.1607 - lr: 0.0010 - 315ms/epoch - 7ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.02401
43/43 - 0s - loss: 0.0580 - val_loss: 0.4810 - lr: 0.0010 - 296ms/epoch - 7ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.02401
43/43 - 0s - loss: 0.0439 - val_loss: 0.0782 - lr: 0.0010 - 319ms/epoch - 7ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.02401 to 0.01150, saving model to LSTM5.h5
43/43 - 0s - loss: 0.0390 - val_loss: 0.0115 - lr: 0.0010 - 333ms/epoch - 8ms/step
Epoch 7/500

Epoch 00007: val_loss improved from 0.01150 to 0.00908, saving model to LSTM5.h5
43/43 - 0s - loss: 0.0450 - val_loss: 0.0091 - lr: 0.0010 - 345ms/epoch - 8ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.00908
43/43 - 0s - loss: 0.0358 - val_loss: 0.0364 - lr: 0.0010 - 327ms/epoch - 8ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.00908
43/43 - 0s - loss: 0.0299 - val_loss: 0.0156 - lr: 0.0010 - 333ms/epoch - 8ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.00908
43/43 - 0s - loss: 0.0331 - val_loss: 0.0099 - lr: 0.0010 - 312ms/epoch - 7ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.00908
43/43 - 0s - loss: 0.0290 - val_loss: 0.0154 - lr: 0.0010 - 344ms/epoch - 8ms/step
Epoch 12/500

Epoch 00012: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00012: val_loss did not improve from 0.00908
43/43 - 0s - loss: 0.0308 - val_loss: 0.1072 - lr: 0.0010 - 321ms/epoch - 7ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.00908
43/43 - 0s - loss: 0.0293 - val_loss: 0.0905 - lr: 1.0000e-04 - 305ms/epoch - 7ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.00908
43/43 - 0s - loss: 0.0274 - val_loss: 0.0716 - lr: 1.0000e-04 - 317ms/epoch - 7ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.00908
43/43 - 0s - loss: 0.0280 - val_loss: 0.0534 - lr: 1.0000e-04 - 303ms/epoch - 7ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.00908
43/43 - 0s - loss: 0.0279 - val_loss: 0.0413 - lr: 1.0000e-04 - 308ms/epoch - 7ms/step
Epoch 17/500

Epoch 00017: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00017: val_loss did not improve from 0.00908
43/43 - 0s - loss: 0.0267 - val_loss: 0.0300 - lr: 1.0000e-04 - 339ms/epoch - 8ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.00908
43/43 - 0s - loss: 0.0270 - val_loss: 0.0291 - lr: 1.0000e-05 - 300ms/epoch - 7ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.00908
43/43 - 0s - loss: 0.0269 - val_loss: 0.0282 - lr: 1.0000e-05 - 300ms/epoch - 7ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.00908
43/43 - 0s - loss: 0.0254 - val_loss: 0.0272 - lr: 1.0000e-05 - 309ms/epoch - 7ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.00908
43/43 - 0s - loss: 0.0274 - val_loss: 0.0262 - lr: 1.0000e-05 - 300ms/epoch - 7ms/step
Epoch 22/500

Epoch 00022: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00022: val_loss did not improve from 0.00908
43/43 - 0s - loss: 0.0267 - val_loss: 0.0252 - lr: 1.0000e-05 - 298ms/epoch - 7ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.00908
43/43 - 0s - loss: 0.0241 - val_loss: 0.0243 - lr: 1.0000e-05 - 317ms/epoch - 7ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.00908
43/43 - 0s - loss: 0.0258 - val_loss: 0.0235 - lr: 1.0000e-05 - 346ms/epoch - 8ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.00908
43/43 - 0s - loss: 0.0274 - val_loss: 0.0226 - lr: 1.0000e-05 - 305ms/epoch - 7ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.00908
43/43 - 0s - loss: 0.0269 - val_loss: 0.0217 - lr: 1.0000e-05 - 302ms/epoch - 7ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.00908
43/43 - 0s - loss: 0.0255 - val_loss: 0.0207 - lr: 1.0000e-05 - 310ms/epoch - 7ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.00908
43/43 - 0s - loss: 0.0270 - val_loss: 0.0200 - lr: 1.0000e-05 - 301ms/epoch - 7ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.00908
43/43 - 0s - loss: 0.0254 - val_loss: 0.0192 - lr: 1.0000e-05 - 310ms/epoch - 7ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.00908
43/43 - 0s - loss: 0.0264 - val_loss: 0.0185 - lr: 1.0000e-05 - 305ms/epoch - 7ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.00908
43/43 - 0s - loss: 0.0267 - val_loss: 0.0178 - lr: 1.0000e-05 - 312ms/epoch - 7ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.00908
43/43 - 0s - loss: 0.0251 - val_loss: 0.0174 - lr: 1.0000e-05 - 323ms/epoch - 8ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.00908
43/43 - 0s - loss: 0.0266 - val_loss: 0.0167 - lr: 1.0000e-05 - 317ms/epoch - 7ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.00908
43/43 - 0s - loss: 0.0249 - val_loss: 0.0160 - lr: 1.0000e-05 - 332ms/epoch - 8ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.00908
43/43 - 0s - loss: 0.0261 - val_loss: 0.0155 - lr: 1.0000e-05 - 315ms/epoch - 7ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.00908
43/43 - 0s - loss: 0.0269 - val_loss: 0.0149 - lr: 1.0000e-05 - 319ms/epoch - 7ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.00908
43/43 - 0s - loss: 0.0244 - val_loss: 0.0144 - lr: 1.0000e-05 - 321ms/epoch - 7ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.00908
43/43 - 0s - loss: 0.0261 - val_loss: 0.0139 - lr: 1.0000e-05 - 331ms/epoch - 8ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.00908
43/43 - 0s - loss: 0.0233 - val_loss: 0.0136 - lr: 1.0000e-05 - 306ms/epoch - 7ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.00908
43/43 - 0s - loss: 0.0248 - val_loss: 0.0132 - lr: 1.0000e-05 - 333ms/epoch - 8ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.00908
43/43 - 0s - loss: 0.0272 - val_loss: 0.0127 - lr: 1.0000e-05 - 347ms/epoch - 8ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.00908
43/43 - 0s - loss: 0.0263 - val_loss: 0.0122 - lr: 1.0000e-05 - 306ms/epoch - 7ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.00908
43/43 - 0s - loss: 0.0259 - val_loss: 0.0117 - lr: 1.0000e-05 - 325ms/epoch - 8ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.00908
43/43 - 0s - loss: 0.0245 - val_loss: 0.0111 - lr: 1.0000e-05 - 312ms/epoch - 7ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.00908
43/43 - 0s - loss: 0.0265 - val_loss: 0.0107 - lr: 1.0000e-05 - 299ms/epoch - 7ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.00908
43/43 - 0s - loss: 0.0248 - val_loss: 0.0104 - lr: 1.0000e-05 - 333ms/epoch - 8ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.00908
43/43 - 0s - loss: 0.0276 - val_loss: 0.0102 - lr: 1.0000e-05 - 316ms/epoch - 7ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.00908
43/43 - 0s - loss: 0.0274 - val_loss: 0.0098 - lr: 1.0000e-05 - 356ms/epoch - 8ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.00908
43/43 - 0s - loss: 0.0242 - val_loss: 0.0094 - lr: 1.0000e-05 - 313ms/epoch - 7ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.00908
43/43 - 0s - loss: 0.0258 - val_loss: 0.0092 - lr: 1.0000e-05 - 329ms/epoch - 8ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.00908
43/43 - 0s - loss: 0.0251 - val_loss: 0.0092 - lr: 1.0000e-05 - 326ms/epoch - 8ms/step
Epoch 52/500

Epoch 00052: val_loss improved from 0.00908 to 0.00904, saving model to LSTM5.h5
43/43 - 1s - loss: 0.0269 - val_loss: 0.0090 - lr: 1.0000e-05 - 528ms/epoch - 12ms/step
Epoch 53/500

Epoch 00053: val_loss improved from 0.00904 to 0.00881, saving model to LSTM5.h5
43/43 - 0s - loss: 0.0265 - val_loss: 0.0088 - lr: 1.0000e-05 - 334ms/epoch - 8ms/step
Epoch 54/500

Epoch 00054: val_loss improved from 0.00881 to 0.00874, saving model to LSTM5.h5
43/43 - 0s - loss: 0.0261 - val_loss: 0.0087 - lr: 1.0000e-05 - 356ms/epoch - 8ms/step
Epoch 55/500

Epoch 00055: val_loss improved from 0.00874 to 0.00866, saving model to LSTM5.h5
43/43 - 0s - loss: 0.0257 - val_loss: 0.0087 - lr: 1.0000e-05 - 343ms/epoch - 8ms/step
Epoch 56/500

Epoch 00056: val_loss improved from 0.00866 to 0.00844, saving model to LSTM5.h5
43/43 - 0s - loss: 0.0222 - val_loss: 0.0084 - lr: 1.0000e-05 - 365ms/epoch - 8ms/step
Epoch 57/500

Epoch 00057: val_loss improved from 0.00844 to 0.00830, saving model to LSTM5.h5
43/43 - 0s - loss: 0.0230 - val_loss: 0.0083 - lr: 1.0000e-05 - 322ms/epoch - 7ms/step
Epoch 58/500

Epoch 00058: val_loss improved from 0.00830 to 0.00809, saving model to LSTM5.h5
43/43 - 0s - loss: 0.0262 - val_loss: 0.0081 - lr: 1.0000e-05 - 353ms/epoch - 8ms/step
Epoch 59/500

Epoch 00059: val_loss improved from 0.00809 to 0.00801, saving model to LSTM5.h5
43/43 - 0s - loss: 0.0251 - val_loss: 0.0080 - lr: 1.0000e-05 - 344ms/epoch - 8ms/step
Epoch 60/500

Epoch 00060: val_loss improved from 0.00801 to 0.00799, saving model to LSTM5.h5
43/43 - 0s - loss: 0.0233 - val_loss: 0.0080 - lr: 1.0000e-05 - 339ms/epoch - 8ms/step
Epoch 61/500

Epoch 00061: val_loss did not improve from 0.00799
43/43 - 0s - loss: 0.0235 - val_loss: 0.0080 - lr: 1.0000e-05 - 349ms/epoch - 8ms/step
Epoch 62/500

Epoch 00062: val_loss improved from 0.00799 to 0.00799, saving model to LSTM5.h5
43/43 - 0s - loss: 0.0249 - val_loss: 0.0080 - lr: 1.0000e-05 - 357ms/epoch - 8ms/step
Epoch 63/500

Epoch 00063: val_loss improved from 0.00799 to 0.00797, saving model to LSTM5.h5
43/43 - 0s - loss: 0.0250 - val_loss: 0.0080 - lr: 1.0000e-05 - 349ms/epoch - 8ms/step
Epoch 64/500

Epoch 00064: val_loss improved from 0.00797 to 0.00788, saving model to LSTM5.h5
43/43 - 0s - loss: 0.0255 - val_loss: 0.0079 - lr: 1.0000e-05 - 354ms/epoch - 8ms/step
Epoch 65/500

Epoch 00065: val_loss improved from 0.00788 to 0.00785, saving model to LSTM5.h5
43/43 - 0s - loss: 0.0252 - val_loss: 0.0079 - lr: 1.0000e-05 - 346ms/epoch - 8ms/step
Epoch 66/500

Epoch 00066: val_loss improved from 0.00785 to 0.00785, saving model to LSTM5.h5
43/43 - 0s - loss: 0.0241 - val_loss: 0.0078 - lr: 1.0000e-05 - 368ms/epoch - 9ms/step
Epoch 67/500

Epoch 00067: val_loss improved from 0.00785 to 0.00784, saving model to LSTM5.h5
43/43 - 0s - loss: 0.0223 - val_loss: 0.0078 - lr: 1.0000e-05 - 374ms/epoch - 9ms/step
[epochs 68-117 omitted: val_loss did not improve from 0.00784; train loss fluctuated between 0.021 and 0.027 at lr 1.0000e-05]
Epoch 00117: early stopping
SMA
Prediction vs Close:		51.49% Accuracy
Prediction vs Prediction:	48.13% Accuracy
MSE:	 38.984836670221576 
RMSE:	 6.24378384236847 
MAPE:	 5.10393861237253

EMA
Prediction vs Close:		55.22% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 124.25764226707467 
RMSE:	 11.1470912020614 
MAPE:	 9.17724208981177

WMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	43.66% Accuracy
MSE:	 38.21405797999008 
RMSE:	 6.1817520154071275 
MAPE:	 5.0592557753421294

DEMA
Prediction vs Close:		51.87% Accuracy
Prediction vs Prediction:	42.54% Accuracy
MSE:	 272.80520892035037 
RMSE:	 16.51681594376926 
MAPE:	 15.690440427295842

KAMA
Prediction vs Close:		55.6% Accuracy
Prediction vs Prediction:	45.9% Accuracy
MSE:	 39.855043502438484 
RMSE:	 6.313085101789654 
MAPE:	 4.932118299391016

MIDPOINT
Prediction vs Close:		51.87% Accuracy
Prediction vs Prediction:	47.39% Accuracy
MSE:	 247.8059617737088 
RMSE:	 15.741853822650901 
MAPE:	 13.137429929578502

T3
Prediction vs Close:		54.85% Accuracy
Prediction vs Prediction:	42.91% Accuracy
MSE:	 210.84512917819418 
RMSE:	 14.520507194247527 
MAPE:	 11.877711162377306
TEMA
TEMA([input_arrays], [timeperiod=30])

Triple Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
9
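The TA-Lib help text above describes TEMA as a triple exponential moving average. Conceptually, TEMA = 3·EMA1 − 3·EMA2 + EMA3, where EMA2 is the EMA of EMA1 and EMA3 is the EMA of EMA2; the three terms cancel most of the smoothing lag. A minimal pandas sketch of that formula (an illustration, not TA-Lib's implementation, which seeds its EMAs differently):

```python
import pandas as pd

def tema(price: pd.Series, timeperiod: int = 30) -> pd.Series:
    """Triple Exponential Moving Average: 3*EMA1 - 3*EMA2 + EMA3."""
    ema1 = price.ewm(span=timeperiod, adjust=False).mean()
    ema2 = ema1.ewm(span=timeperiod, adjust=False).mean()
    ema3 = ema2.ewm(span=timeperiod, adjust=False).mean()
    return 3 * ema1 - 3 * ema2 + ema3
```

Because the lag terms cancel, TEMA tracks a trending price much more tightly than a single EMA of the same period, which is why it tends to have lower residual volatility going into the ARIMA stage.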

Working on TEMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-16493.570, Time=2.44 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-15527.581, Time=7.40 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16154.477, Time=7.16 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-15134.948, Time=7.03 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-16538.454, Time=8.41 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-16271.346, Time=2.22 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=-16350.992, Time=12.88 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal
  return np.roots(self.polynomial_reduced_ma)**-1
 ARIMA(2,3,2)(0,0,0)[0]             : AIC=-16200.149, Time=3.37 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-16461.809, Time=15.31 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=-16384.147, Time=3.25 sec
 ARIMA(3,3,2)(0,0,0)[0]             : AIC=inf, Time=10.13 sec
 ARIMA(2,3,1)(0,0,0)[0] intercept   : AIC=-15110.164, Time=5.64 sec

Best model:  ARIMA(2,3,1)(0,0,0)[0]          
Total fit time: 85.264 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(2, 3, 1)   Log Likelihood                8294.227
Date:                Sun, 12 Dec 2021   AIC                         -16538.454
Time:                        14:05:38   BIC                         -16421.183
Sample:                             0   HQIC                        -16493.417
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1          3.591e-07      0.001      0.000      1.000      -0.002       0.002
x2            3.6e-07      0.002      0.000      1.000      -0.003       0.003
x3          3.611e-07      0.001      0.000      1.000      -0.002       0.002
x4             1.0000      0.000   2628.605      0.000       0.999       1.001
x5          3.432e-07      0.000      0.001      0.999      -0.001       0.001
x6          1.714e-07   4.05e-05      0.004      0.997   -7.91e-05    7.95e-05
x7          3.541e-07      0.001      0.000      1.000      -0.003       0.003
x8            -0.0002      0.000     -1.006      0.315      -0.001       0.000
x9         -7.559e-08      0.000     -0.000      1.000      -0.001       0.001
x10            0.0001      0.000      0.492      0.623      -0.000       0.001
x11           -0.0006      0.000     -2.697      0.007      -0.001      -0.000
x12            0.0005      0.000      1.741      0.082   -5.97e-05       0.001
x13           3.6e-07      0.000      0.002      0.999      -0.000       0.000
x14         1.003e-06      0.001      0.001      0.999      -0.002       0.002
x15         3.506e-07   7.16e-05      0.005      0.996      -0.000       0.000
x16         5.157e-07      0.000      0.005      0.996      -0.000       0.000
x17         3.516e-07   6.59e-05      0.005      0.996      -0.000       0.000
x18         1.166e-07      0.000      0.001      1.000      -0.000       0.000
x19         3.922e-07    7.5e-05      0.005      0.996      -0.000       0.000
x20         -3.64e-07      0.000     -0.002      0.999      -0.000       0.000
x21         4.458e-07      0.000      0.004      0.997      -0.000       0.000
ar.L1         -0.4019   4.12e-05  -9758.484      0.000      -0.402      -0.402
ar.L2         -0.1006   1.58e-05  -6360.873      0.000      -0.101      -0.101
ma.L1         -0.7963   8.45e-06  -9.43e+04      0.000      -0.796      -0.796
sigma2      9.048e-11    7.2e-11      1.257      0.209   -5.06e-11    2.32e-10
===================================================================================
Ljung-Box (L1) (Q):                  64.02   Jarque-Bera (JB):           4424775.47
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             5.53
Prob(H) (two-sided):                  0.00   Kurtosis:                       366.04
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.02e+20. Standard errors may be unstable.
ARIMA order: (2, 3, 1) 
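The diagnostics in the SARIMAX table above (Ljung-Box Q ≈ 64 with p ≈ 0, Jarque-Bera p ≈ 0, kurtosis ≈ 366) say the residuals are neither uncorrelated nor Gaussian, i.e. structure remains for the LSTM stage to absorb. As a rough sketch of how those two statistics are computed, here on synthetic residuals (the `ljung_box` helper below is illustrative, not a function from this notebook):

```python
import numpy as np
from scipy.stats import chi2, jarque_bera

def ljung_box(resid, lags=1):
    """Ljung-Box Q statistic and p-value for autocorrelation up to `lags`."""
    resid = np.asarray(resid, dtype=float)
    n = len(resid)
    r = resid - resid.mean()
    denom = np.sum(r ** 2)
    q = 0.0
    for k in range(1, lags + 1):
        rho_k = np.sum(r[k:] * r[:-k]) / denom   # lag-k autocorrelation
        q += rho_k ** 2 / (n - k)
    q *= n * (n + 2)
    return q, chi2.sf(q, df=lags)           # p-value from a chi-squared tail

rng = np.random.default_rng(0)
white = rng.normal(size=500)                 # uncorrelated, Gaussian residuals
q, p = ljung_box(white, lags=1)              # small Q, large p expected
jb_stat, jb_p = jarque_bera(white)           # tests skew/kurtosis vs. normal
```

A low Ljung-Box p-value (as in the table) rejects "residuals are white noise"; a low Jarque-Bera p-value rejects normality, consistent with the extreme kurtosis reported.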

WARNING:tensorflow:Layer lstm_24 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.21133, saving model to LSTM5.h5
90/90 - 2s - loss: 0.1634 - val_loss: 0.2113 - lr: 0.0010 - 2s/epoch - 24ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.21133 to 0.04083, saving model to LSTM5.h5
90/90 - 1s - loss: 0.2226 - val_loss: 0.0408 - lr: 0.0010 - 666ms/epoch - 7ms/step
[epochs 3-52 omitted: val_loss did not improve from 0.04083; ReduceLROnPlateau cut the learning rate to 1e-04 at epoch 7 and to 1e-05 at epoch 12]
Epoch 00052: early stopping
TEMA
Prediction vs Close:		51.87% Accuracy
Prediction vs Prediction:	47.76% Accuracy
MSE:	 47.81525777472662 
RMSE:	 6.9148577552055706 
MAPE:	 5.805355375153735
Runtime: mins: 43.70210928193333
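The per-MA scores above combine a directional hit rate ("Prediction vs Close") with level errors (MSE, RMSE, MAPE). A minimal NumPy sketch of how such a report can be computed (the `report` helper is illustrative, not one of the notebook's functions):

```python
import numpy as np

def report(actual, predicted):
    """MSE, RMSE, MAPE (%) and directional accuracy (%) of a forecast."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    err = actual - predicted
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs(err) / np.abs(actual)) * 100
    # directional accuracy: did the forecast move the same way as the close?
    hits = np.sign(np.diff(predicted)) == np.sign(np.diff(actual))
    return mse, rmse, mape, 100 * hits.mean()
```

Note that a model can have a low RMSE yet a near-coin-flip directional accuracy (as several of the ~50% scores above show), which is why both kinds of metric are reported.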

Architecture Used

In [ ]:
from google.colab import files
import cv2
uploaded = files.upload()
Saving Experiment5.png to Experiment5 (1).png
In [ ]:
img = cv2.imread('Experiment5.png')
plt.figure(figsize=(20,10))
plt.axis("off")
plt.title('LSTM Architecture '+imgfile,fontsize=18)
plt.imshow(img)
Out[ ]:
<matplotlib.image.AxesImage at 0x7fda6cb8a950>

Model Plots

In [79]:
with open('simulation5_data.json') as json_file:
    simulation5 = json.load(json_file)
fileimg = 'Experiment5'
In [80]:
for i in range(len(list(simulation5.keys()))):
  SIM = list(simulation5.keys())[i]
  plot_train(simulation5,SIM)
  plot_test(simulation5,SIM)
----- Train RMSE for SMA ----- 7.825688029871665
----- Train_MSE_LSTM for SMA ----- 61.24139314087665
----- Train MAE LSTM for SMA ----- 6.696912122355534
----- Test RMSE for SMA----- 6.24378384236847
----- Test_MSE_LSTM for SMA----- 38.984836670221576
----- Test_MAE_LSTM for SMA----- 5.10393861237253
----- Train RMSE for EMA ----- 9.263860311359515
----- Train_MSE_LSTM for EMA ----- 85.819107868382
----- Train MAE LSTM for EMA ----- 8.131221693493242
----- Test RMSE for EMA----- 11.1470912020614
----- Test_MSE_LSTM for EMA----- 124.25764226707467
----- Test_MAE_LSTM for EMA----- 9.17724208981177
----- Train RMSE for WMA ----- 9.62296255691105
----- Train_MSE_LSTM for WMA ----- 92.60140837171207
----- Train MAE LSTM for WMA ----- 8.447782940991997
----- Test RMSE for WMA----- 6.1817520154071275
----- Test_MSE_LSTM for WMA----- 38.21405797999008
----- Test_MAE_LSTM for WMA----- 5.0592557753421294
----- Train RMSE for DEMA ----- 11.500185279159618
----- Train_MSE_LSTM for DEMA ----- 132.25426145499958
----- Train MAE LSTM for DEMA ----- 10.30295939891055
----- Test RMSE for DEMA----- 16.51681594376926
----- Test_MSE_LSTM for DEMA----- 272.80520892035037
----- Test_MAE_LSTM for DEMA----- 15.690440427295842
----- Train RMSE for KAMA ----- 9.40669400960287
----- Train_MSE_LSTM for KAMA ----- 88.48589219029851
----- Train MAE LSTM for KAMA ----- 8.417607692965897
----- Test RMSE for KAMA----- 6.313085101789654
----- Test_MSE_LSTM for KAMA----- 39.855043502438484
----- Test_MAE_LSTM for KAMA----- 4.932118299391016
----- Train RMSE for MIDPOINT ----- 8.65407178367174
----- Train_MSE_LSTM for MIDPOINT ----- 74.89295843694336
----- Train MAE LSTM for MIDPOINT ----- 7.619016227644343
----- Test RMSE for MIDPOINT----- 15.741853822650901
----- Test_MSE_LSTM for MIDPOINT----- 247.8059617737088
----- Test_MAE_LSTM for MIDPOINT----- 13.137429929578502
----- Train RMSE for T3 ----- 11.015384594469248
----- Train_MSE_LSTM for T3 ----- 121.33869776407043
----- Train MAE LSTM for T3 ----- 9.875190325929681
----- Test RMSE for T3----- 14.520507194247527
----- Test_MSE_LSTM for T3----- 210.84512917819418
----- Test_MAE_LSTM for T3----- 11.877711162377306
----- Train RMSE for TEMA ----- 6.884503600474202
----- Train_MSE_LSTM for TEMA ----- 47.396389824942254
----- Train MAE LSTM for TEMA ----- 4.709048845781707
----- Test RMSE for TEMA----- 6.9148577552055706
----- Test_MSE_LSTM for TEMA----- 47.81525777472662
----- Test_MAE_LSTM for TEMA----- 5.805355375153735

ARIMA with Exogenous Variables, Multistep Multivariate LSTM Hybrid Model: Experiment 6

In [ ]:
def get_arima_exog(dataframe, original_data, train_len, test_len):

    # prepare train and test data for the exogenous variables
    X_value = pd.DataFrame(dataframe.iloc[:, :])
    y_value = pd.DataFrame(dataframe.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scale_dataset = X_scaler.fit_transform(X_value)
    y_scale_dataset = y_scaler.fit_transform(y_value)
    # Get data and check shape
    # X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)#X will be of shape 224 X 3 X 21 (each 3 X 21 array will be 3 days' worth of data). yc will have the corresponding closing price value
    # pdb.set_trace()
    X_train, X_test, = split_train_test(X_scale_dataset)
    y_train, y_test, = split_train_test(y_scale_dataset)
    yc_train, yc_test = split_train_test(original_data)
    yc = yc_test.values.tolist()
    y_train_list = y_train.flatten().tolist()
    y_test_list = y_test.flatten().tolist()
    # yc_train, yc_test, = split_train_test(original_data)
    index_train, index_test, = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)

    # Initialize model and select its order via stepwise search on AIC
    model = auto_arima(y_train_list, exogenous=X_train, trace=True, error_action='ignore',
                       start_p=1, start_q=1, max_p=3, max_q=3, d=3,
                       suppress_warnings=True, stepwise=True, seasonal=True)

    # Determine model parameters
    print(model.summary())
    model.fit(y_train_list, maxiter=200)
    order = model.get_params()['order']
    print('ARIMA order:', order, '\n')

    # Generate walk-forward one-step predictions
    # (note: these refits use the selected order only, without the exogenous regressors)
    prediction = []
    for i in range(len(y_test_list)):
        model = pmdarima.ARIMA(order=order)
        model.fit(y_train_list)
        prediction.append(model.predict()[0])
        y_train_list.append(y_test_list[i])

    predictionte = y_scaler.inverse_transform(np.array(prediction).reshape(-1,1))
    y_test_ = y_scaler.inverse_transform(np.array(y_test_list).reshape(-1,1))

    # Generate error data
    mse = mean_squared_error(yc_test, predictionte)
    rmse = mse ** 0.5
    mae = mean_absolute_error(y_test_ , predictionte )
    return yc,predictionte.flatten().tolist(), mse, rmse, mae
In [ ]:
def get_lstm(data, original_data, train_len, test_len, img_file, ma, lstm_len=3):
    # prepare train and test data
    X_value = pd.DataFrame(data.iloc[:, :])
    y_value = pd.DataFrame(data.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scale_dataset = X_scaler.fit_transform(X_value)
    y_scale_dataset = y_scaler.fit_transform(y_value)
    # Build windowed samples: X is (samples, n_steps_in, features); yc holds the matching closing prices
    X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)
    # pdb.set_trace()
    X_train, X_test, = split_train_test(X)
    y_train, y_test, = split_train_test(y)
    # yc_train, yc_test, = split_train_test(original_data)
    index_train, index_test, = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
    det = 20
    input_dim = X_train.shape[1]     # n_steps_in (e.g. 3)
    feature_size = X_train.shape[2]  # number of features (e.g. 24)
    output_dim = y_train.shape[1]    # n_steps_out (e.g. 1)



    # Option 1
    # Set up & fit LSTM RNN
    # model = Sequential()
    # model.add(LSTM(256, activation='relu', kernel_initializer='he_normal', input_shape=(input_dim, feature_size)))
    # model.add(Dense(units=64,activation='relu'))
    # model.add(Dropout(0.5))
    # model.add(Dense(units=output_dim))
    # model.compile(optimizer=Adam(learning_rate = 0.001), loss='mse')

    # ## Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()


    # Option 2: bidirectional LSTM
    model = Sequential()
    model.add(Bidirectional(LSTM(units=128), input_shape=(input_dim, feature_size)))
    model.add(Dense(64))
    model.add(Dense(units=output_dim))
    model.compile(optimizer=Adam(learning_rate=0.001), loss='mean_squared_error', metrics=['accuracy'])
    # Common code
    callbacks = [
    EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    ModelCheckpoint('LSTM6.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    fname1 = img_file+'.png'
    tensorflow.keras.utils.plot_model(
        model, to_file=fname1, show_shapes=True, show_dtype=False,
        show_layer_names=True, expand_nested=False, dpi=96,
        layer_range=None, show_layer_activations=False
    )
    history = model.fit(X_train, y_train, epochs=500, batch_size=int( optimized_period[ma]), verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # plot loss
    fname2 = img_file+'-'+ma
    plt.title(img_file+'-'+ma+' Loss')
    plt.xlabel("Epochs")
    plt.ylabel("Loss")
    pyplot.plot(history.history['loss'], label='train')
    pyplot.plot(history.history['val_loss'], label='validation')
    pyplot.legend()
    pyplot.savefig(fname2+'.png',dpi='figure')
    pyplot.show()

    # Option 3
    # define custom activation
    # 
    # class Double_Tanh(Activation):
    #     def __init__(self, activation, **kwargs):
    #         super(Double_Tanh, self).__init__(activation, **kwargs)
    #         self.__name__ = 'double_tanh'

    # def double_tanh(x):
    #     return (K.tanh(x) * 2)

    # get_custom_objects().update({'double_tanh':Double_Tanh(double_tanh)})
    #     # Model Generation
    # model = Sequential()
    # #check https://machinelearningmastery.com/use-weight-regularization-lstm-networks-time-series-forecasting/
    # model.add(LSTM(25, input_shape=(input_dim, feature_size), dropout=0.2, kernel_regularizer=l1_l2(0.00,0.00), bias_regularizer=l1_l2(0.00,0.00)))
    # model.add(Dense(1))
    # model.add(Activation(double_tanh))
    # model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mse', 'mae'])
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()
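The commented-out Option 3 registers a custom "double tanh" activation. Stripped of the Keras wrapper, the function is just 2·tanh(x), which widens the head's output range from (-1, 1) to (-2, 2). A minimal NumPy-only sketch of the same function (independent of Keras, for illustration):

```python
import numpy as np

# Standalone version of Option 3's custom activation: double_tanh(x) = 2*tanh(x).
# Widening the range to (-2, 2) lets the output layer emit residuals whose
# magnitude can exceed 1 without saturating a plain tanh.
def double_tanh(x):
    return np.tanh(x) * 2

print(double_tanh(0.0))    # zero at the origin
print(double_tanh(10.0))   # approaches 2 for large inputs
```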

    # Option 4
    # Set up & fit LSTM RNN
    # model = Sequential()
    # model.add(LSTM(units=lstm_len, return_sequences=True, input_shape=(x_train.shape[1], 1)))
    # model.add(LSTM(units=int(lstm_len/2)))
    # model.add(Dense(1, activation='sigmoid'))
    # model.compile(loss='mean_squared_error', optimizer='adam')
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()



    # Generate predictions
    predictiontr = model.predict(X_train, verbose=0)
    # inverse-transform and flatten the (n, 1) prediction array to a flat list
    predictiontr = y_scaler.inverse_transform(predictiontr).flatten().tolist()
    # Generate error data

    ## replace with yc , xtest generated by new multistep method
    # compute training error on the original scale (y_train is still scaled,
    # while predictiontr has already been inverse-transformed)
    Original_tr = y_scaler.inverse_transform(y_train).flatten().tolist()
    mse_tr = mean_squared_error(Original_tr, predictiontr)
    rmse_tr = mse_tr ** 0.5
    # mape = mean_absolute_percentage_error(X_test, pd.Series(predictiontr))
    mae_tr = mean_absolute_error(Original_tr, pd.Series(predictiontr))


    predictionte = model.predict(X_test, verbose=0)
    predictionte = (y_scaler.inverse_transform(predictionte) - det).flatten().tolist()
    # Generate error data

    # as above, compare on the original scale rather than against the scaled y_test
    Original_te = y_scaler.inverse_transform(y_test).flatten().tolist()
    mse_te = mean_squared_error(Original_te, predictionte)
    rmse_te = mse_te ** 0.5
    # mape = mean_absolute_percentage_error(X_test, pd.Series(predictionte))
    mae_te = mean_absolute_error(Original_te, pd.Series(predictionte))

    return Original_tr, predictiontr, mse_tr, rmse_tr, mae_tr, Original_te, predictionte, mse_te, rmse_te, mae_te
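The main loop below splits the close series into a smoothed low-volatility component (handed to ARIMA) and a high-volatility residual (handed to `get_lstm`). A minimal pandas sketch of that decomposition, using hypothetical prices and a pandas rolling mean in place of TA-Lib's SMA, verifying that the two parts recombine exactly:

```python
import numpy as np
import pandas as pd

# Hypothetical close prices (illustration only, not the notebook's data)
close = pd.Series([10.0, 11.0, 12.0, 11.5, 12.5, 13.0, 12.0, 13.5])

period = 3
low_vol = close.rolling(period).mean().fillna(0)   # smooth component (NaNs zero-filled, as in the main loop)
high_vol = close.subtract(low_vol, fill_value=0)   # volatile residual, modeled by the LSTM

# The hybrid forecast is ARIMA(low_vol) + LSTM(high_vol); at minimum the
# decomposition itself must recombine to the original series:
assert np.allclose(low_vol + high_vol, close)
```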
In [ ]:
if __name__ == '__main__':
    start_time = timeit.default_timer()
    simulation6 = {}
    imgfile = 'Experiment6'
    for ma in optimized_period:
                print(ma)
                print(functions[ma])
                print(int(optimized_period[ma]))
              # if ma == 'SMA':
                low_vol = df.apply(lambda c: functions[ma](c, timeperiod=int(optimized_period[ma])))
                low_vol = low_vol.fillna(0)
                low_vol_data = df['close']
                high_vol = pd.DataFrame()
                df2 = df.copy()
                for i in df2.columns:
                  if i in low_vol.columns:
                    high_vol[i] = df2[i].subtract(low_vol[i], fill_value=0)
                high_vol_data = df['close']
                ## *****************************************************
                # Generate ARIMA and LSTM predictions
                print('\nWorking on ' + ma + ' predictions')
                try:
                  print('parameters used : ', train_len, test_len)
                  low_vol_Original, low_vol_prediction, low_vol_mse, low_vol_rmse,low_vol_mae = get_arima_exog(low_vol,low_vol_data, train_len, test_len)
                except:
                    print('ARIMA error, skipping to next MA type')
                    continue
                Original_tr, predictiontr, mse_tr, rmse_tr, mae_tr, high_vol_Original, high_vol_prediction, high_vol_mse, high_vol_rmse, high_vol_mae = get_lstm(high_vol, high_vol_data, train_len, test_len, imgfile, ma)
                final_prediction_tr = df['close'].head(train_len).values + pd.Series(predictiontr) # ignoring first 3 steps 
                mse_ftr = mean_squared_error(df['close'].head(train_len).values,final_prediction_tr.values)
                rmse_ftr = mse_ftr ** 0.5
                mape_ftr = mean_absolute_percentage_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
                mae_ftr = mean_absolute_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)

                final_prediction = pd.Series(low_vol_prediction[3:]) + pd.Series(high_vol_prediction)
                mse = mean_squared_error(df['close'].tail(test_len).values,final_prediction.values)
                rmse = mse ** 0.5
                mape = mean_absolute_percentage_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
                mae = mean_absolute_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
                # Generate prediction accuracy
                actual = df['close'].tail(test_len).values
                result_1 = []
                result_2 = []
                for i in range(1, len(final_prediction)):
                    # Compare prediction to previous close price
                    if final_prediction[i] > actual[i-1] and actual[i] > actual[i-1]:
                        result_1.append(1)
                    elif final_prediction[i] < actual[i-1] and actual[i] < actual[i-1]:
                        result_1.append(1)
                    else:
                        result_1.append(0)

                    # Compare prediction to previous prediction
                    if final_prediction[i] > final_prediction[i-1] and actual[i] > actual[i-1]:
                        result_2.append(1)
                    elif final_prediction[i] < final_prediction[i-1] and actual[i] < actual[i-1]:
                        result_2.append(1)
                    else:
                        result_2.append(0)

                accuracy_1 = np.mean(result_1)
                accuracy_2 = np.mean(result_2)

                simulation6[ma] = {'low_vol': {'original':list(low_vol_Original), 'prediction': list(low_vol_prediction) , 'mse': low_vol_mse,
                                              'rmse': low_vol_rmse, 'mae' : low_vol_mae},
                                  'high_vol': {'original':list(high_vol_Original),'prediction': list(high_vol_prediction), 'mse': high_vol_mse,
                                              'rmse': high_vol_rmse, 'mae' : high_vol_mae},
                                  'final_tr': {'original':df['close'].head(train_len).tolist(),'prediction': final_prediction_tr.values.tolist(), 'mse': mse_ftr,
                                              'rmse': rmse_ftr, 'mae' : mae_ftr},
                                  'final': {'original': df['close'].tail(test_len).tolist(), 'prediction': final_prediction.values.tolist(), 'mse': mse,
                                            'rmse': rmse, 'mae': mae },
                                  'accuracy': {'prediction vs close': accuracy_1, 'prediction vs prediction': accuracy_2}}

                # save simulation data here as checkpoint
                with open('simulation6_data.json', 'w') as fp:
                    json.dump(simulation6, fp)

                # use a separate name so the outer loop variable `ma` is not shadowed
                for key in simulation6.keys():
                    print('\n' + key)
                    print('Prediction vs Close:\t\t' + str(round(100*simulation6[key]['accuracy']['prediction vs close'], 2))
                          + '% Accuracy')
                    print('Prediction vs Prediction:\t' + str(round(100*simulation6[key]['accuracy']['prediction vs prediction'], 2))
                          + '% Accuracy')
                    print('MSE:\t', simulation6[key]['final']['mse'],
                          '\nRMSE:\t', simulation6[key]['final']['rmse'],
                          '\nMAE:\t', simulation6[key]['final']['mae'])
              # else:
              #   break
    elapsed = timeit.default_timer() - start_time
    print('Runtime (mins):', elapsed/60)
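The two directional-accuracy loops in the cell above can also be written in vectorized form. The sketch below uses hypothetical price arrays and reproduces the loop's strict-inequality tie handling (a flat move counts as a miss):

```python
import numpy as np

# Hypothetical actual closes and model predictions (illustration only)
actual = np.array([10.0, 10.5, 10.2, 10.8, 10.6])
pred   = np.array([10.1, 10.6, 10.4, 10.6, 11.0])

d_act = actual[1:] - actual[:-1]   # realized move vs previous close

# Accuracy 1: prediction compared to the previous actual close
d1 = pred[1:] - actual[:-1]
accuracy_1 = (((d1 > 0) & (d_act > 0)) | ((d1 < 0) & (d_act < 0))).mean()

# Accuracy 2: prediction compared to the previous prediction
d2 = pred[1:] - pred[:-1]
accuracy_2 = (((d2 > 0) & (d_act > 0)) | ((d2 < 0) & (d_act < 0))).mean()

print(accuracy_1, accuracy_2)  # 0.75 0.75 for these sample arrays
```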
SMA
SMA([input_arrays], [timeperiod=30])

Simple Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
17

Working on SMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-15000.708, Time=8.47 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-13492.284, Time=2.32 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-15827.971, Time=7.98 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-13635.197, Time=9.87 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-14132.778, Time=3.91 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-15140.312, Time=9.72 sec
 ARIMA(1,3,0)(0,0,0)[0] intercept   : AIC=-13970.469, Time=7.17 sec

Best model:  ARIMA(1,3,0)(0,0,0)[0]          
Total fit time: 49.461 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(1, 3, 0)   Log Likelihood                7936.985
Date:                Sun, 12 Dec 2021   AIC                         -15827.971
Time:                        16:37:24   BIC                         -15720.081
Sample:                             0   HQIC                        -15786.537
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -4.786e-05      0.001     -0.066      0.947      -0.001       0.001
x2         -4.789e-05      0.001     -0.085      0.932      -0.001       0.001
x3         -4.819e-05      0.000     -0.105      0.917      -0.001       0.001
x4             1.0000      0.001   1557.248      0.000       0.999       1.001
x5         -4.579e-05      0.001     -0.071      0.943      -0.001       0.001
x6          -5.16e-05      0.000     -0.432      0.666      -0.000       0.000
x7         -4.778e-05      0.000     -0.278      0.781      -0.000       0.000
x8            -0.0012      0.000     -7.403      0.000      -0.002      -0.001
x9         -3.454e-06      0.002     -0.002      0.998      -0.003       0.003
x10           -0.0005      0.001     -0.403      0.687      -0.003       0.002
x11            0.0029      0.000     10.904      0.000       0.002       0.003
x12           -0.0003      0.000     -1.815      0.069      -0.001    2.06e-05
x13        -4.809e-05      0.000     -0.157      0.875      -0.001       0.001
x14           -0.0001      0.000     -0.482      0.630      -0.001       0.000
x15        -5.214e-05      0.000     -0.273      0.785      -0.000       0.000
x16        -4.468e-05      0.000     -0.125      0.901      -0.001       0.001
x17        -4.224e-05      0.000     -0.202      0.840      -0.000       0.000
x18        -8.086e-05      0.000     -0.270      0.787      -0.001       0.001
x19        -5.537e-05      0.000     -0.244      0.807      -0.000       0.000
x20         8.423e-05      0.000      0.333      0.739      -0.000       0.001
x21        -4.232e-05      0.000     -0.166      0.868      -0.001       0.000
ar.L1         -0.6666   6.03e-06  -1.11e+05      0.000      -0.667      -0.667
sigma2      4.093e-10   8.97e-11      4.563      0.000    2.33e-10    5.85e-10
===================================================================================
Ljung-Box (L1) (Q):                  60.24   Jarque-Bera (JB):           1334882.31
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.11   Skew:                            -3.81
Prob(H) (two-sided):                  0.00   Kurtosis:                       202.35
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 5.73e+20. Standard errors may be unstable.
ARIMA order: (1, 3, 0) 

/usr/local/lib/python3.7/dist-packages/keras/optimizer_v2/adam.py:105: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.
  super(Adam, self).__init__(name, **kwargs)
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.07591, saving model to LSTM6.h5
48/48 - 4s - loss: 0.1164 - accuracy: 0.0000e+00 - val_loss: 0.0759 - val_accuracy: 0.0037 - lr: 0.0010 - 4s/epoch - 81ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.07591 to 0.01860, saving model to LSTM6.h5
48/48 - 0s - loss: 0.0536 - accuracy: 0.0000e+00 - val_loss: 0.0186 - val_accuracy: 0.0037 - lr: 0.0010 - 246ms/epoch - 5ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.01860
48/48 - 0s - loss: 0.0141 - accuracy: 0.0000e+00 - val_loss: 0.0443 - val_accuracy: 0.0037 - lr: 0.0010 - 230ms/epoch - 5ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.01860 to 0.00820, saving model to LSTM6.h5
48/48 - 0s - loss: 0.0156 - accuracy: 0.0000e+00 - val_loss: 0.0082 - val_accuracy: 0.0037 - lr: 0.0010 - 263ms/epoch - 5ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.00820
48/48 - 0s - loss: 0.0028 - accuracy: 0.0000e+00 - val_loss: 0.0182 - val_accuracy: 0.0037 - lr: 0.0010 - 235ms/epoch - 5ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.00820 to 0.00325, saving model to LSTM6.h5
48/48 - 0s - loss: 0.0035 - accuracy: 0.0000e+00 - val_loss: 0.0033 - val_accuracy: 0.0037 - lr: 0.0010 - 274ms/epoch - 6ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.00325
48/48 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0102 - val_accuracy: 0.0037 - lr: 0.0010 - 220ms/epoch - 5ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.00325
48/48 - 0s - loss: 0.0019 - accuracy: 0.0000e+00 - val_loss: 0.0055 - val_accuracy: 0.0037 - lr: 0.0010 - 232ms/epoch - 5ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.00325
48/48 - 0s - loss: 0.0019 - accuracy: 0.0000e+00 - val_loss: 0.0088 - val_accuracy: 0.0037 - lr: 0.0010 - 238ms/epoch - 5ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.00325
48/48 - 0s - loss: 0.0043 - accuracy: 0.0000e+00 - val_loss: 0.0061 - val_accuracy: 0.0037 - lr: 0.0010 - 223ms/epoch - 5ms/step
Epoch 11/500

Epoch 00011: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00011: val_loss did not improve from 0.00325
48/48 - 0s - loss: 0.0119 - accuracy: 0.0000e+00 - val_loss: 0.0064 - val_accuracy: 0.0037 - lr: 0.0010 - 218ms/epoch - 5ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.00325
48/48 - 0s - loss: 0.0213 - accuracy: 0.0000e+00 - val_loss: 0.0077 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 244ms/epoch - 5ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.00325
48/48 - 0s - loss: 0.0040 - accuracy: 0.0000e+00 - val_loss: 0.0047 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 260ms/epoch - 5ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.00325
48/48 - 0s - loss: 0.0028 - accuracy: 0.0000e+00 - val_loss: 0.0041 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 229ms/epoch - 5ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.00325
48/48 - 0s - loss: 0.0019 - accuracy: 0.0000e+00 - val_loss: 0.0036 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 226ms/epoch - 5ms/step
Epoch 16/500

Epoch 00016: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00016: val_loss improved from 0.00325 to 0.00320, saving model to LSTM6.h5
48/48 - 0s - loss: 0.0014 - accuracy: 0.0000e+00 - val_loss: 0.0032 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 241ms/epoch - 5ms/step
Epoch 17/500

Epoch 00017: val_loss improved from 0.00320 to 0.00320, saving model to LSTM6.h5
48/48 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0032 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 287ms/epoch - 6ms/step
Epoch 18/500

Epoch 00018: val_loss improved from 0.00320 to 0.00319, saving model to LSTM6.h5
48/48 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0032 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 248ms/epoch - 5ms/step
Epoch 19/500

Epoch 00019: val_loss improved from 0.00319 to 0.00317, saving model to LSTM6.h5
48/48 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0032 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 237ms/epoch - 5ms/step
Epoch 20/500

Epoch 00020: val_loss improved from 0.00317 to 0.00315, saving model to LSTM6.h5
48/48 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0031 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 243ms/epoch - 5ms/step
Epoch 21/500

Epoch 00021: val_loss improved from 0.00315 to 0.00313, saving model to LSTM6.h5
48/48 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0031 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 280ms/epoch - 6ms/step
Epoch 22/500

Epoch 00022: val_loss improved from 0.00313 to 0.00310, saving model to LSTM6.h5
48/48 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0031 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 244ms/epoch - 5ms/step
Epoch 23/500

Epoch 00023: val_loss improved from 0.00310 to 0.00308, saving model to LSTM6.h5
48/48 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0031 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 243ms/epoch - 5ms/step
Epoch 24/500

Epoch 00024: val_loss improved from 0.00308 to 0.00306, saving model to LSTM6.h5
48/48 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0031 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 247ms/epoch - 5ms/step
Epoch 25/500

Epoch 00025: val_loss improved from 0.00306 to 0.00304, saving model to LSTM6.h5
48/48 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0030 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 264ms/epoch - 6ms/step
Epoch 26/500

Epoch 00026: val_loss improved from 0.00304 to 0.00302, saving model to LSTM6.h5
48/48 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0030 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 243ms/epoch - 5ms/step
Epoch 27/500

Epoch 00027: val_loss improved from 0.00302 to 0.00300, saving model to LSTM6.h5
48/48 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0030 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 254ms/epoch - 5ms/step
Epoch 28/500

Epoch 00028: val_loss improved from 0.00300 to 0.00299, saving model to LSTM6.h5
48/48 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0030 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 267ms/epoch - 6ms/step
Epoch 29/500

Epoch 00029: val_loss improved from 0.00299 to 0.00297, saving model to LSTM6.h5
48/48 - 0s - loss: 9.9052e-04 - accuracy: 0.0000e+00 - val_loss: 0.0030 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 250ms/epoch - 5ms/step
Epoch 30/500

Epoch 00030: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00030: val_loss improved from 0.00297 to 0.00296, saving model to LSTM6.h5
48/48 - 0s - loss: 9.8140e-04 - accuracy: 0.0000e+00 - val_loss: 0.0030 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 255ms/epoch - 5ms/step
Epoch 31/500

Epoch 00031: val_loss improved from 0.00296 to 0.00295, saving model to LSTM6.h5
48/48 - 0s - loss: 9.7271e-04 - accuracy: 0.0000e+00 - val_loss: 0.0029 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 242ms/epoch - 5ms/step
Epoch 32/500

Epoch 00032: val_loss improved from 0.00295 to 0.00294, saving model to LSTM6.h5
48/48 - 0s - loss: 9.6446e-04 - accuracy: 0.0000e+00 - val_loss: 0.0029 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 251ms/epoch - 5ms/step
Epoch 33/500

Epoch 00033: val_loss improved from 0.00294 to 0.00293, saving model to LSTM6.h5
48/48 - 0s - loss: 9.5663e-04 - accuracy: 0.0000e+00 - val_loss: 0.0029 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 292ms/epoch - 6ms/step
Epoch 34/500

Epoch 00034: val_loss improved from 0.00293 to 0.00292, saving model to LSTM6.h5
48/48 - 0s - loss: 9.4922e-04 - accuracy: 0.0000e+00 - val_loss: 0.0029 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 243ms/epoch - 5ms/step
Epoch 35/500

Epoch 00035: val_loss improved from 0.00292 to 0.00292, saving model to LSTM6.h5
48/48 - 0s - loss: 9.4222e-04 - accuracy: 0.0000e+00 - val_loss: 0.0029 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 239ms/epoch - 5ms/step
Epoch 36/500

Epoch 00036: val_loss improved from 0.00292 to 0.00291, saving model to LSTM6.h5
48/48 - 0s - loss: 9.3562e-04 - accuracy: 0.0000e+00 - val_loss: 0.0029 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 259ms/epoch - 5ms/step
Epoch 37/500

Epoch 00037: val_loss improved from 0.00291 to 0.00291, saving model to LSTM6.h5
48/48 - 0s - loss: 9.2941e-04 - accuracy: 0.0000e+00 - val_loss: 0.0029 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 258ms/epoch - 5ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.00291
48/48 - 0s - loss: 9.2358e-04 - accuracy: 0.0000e+00 - val_loss: 0.0029 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 226ms/epoch - 5ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.00291
48/48 - 0s - loss: 9.1810e-04 - accuracy: 0.0000e+00 - val_loss: 0.0029 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 231ms/epoch - 5ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.00291
48/48 - 0s - loss: 9.1297e-04 - accuracy: 0.0000e+00 - val_loss: 0.0029 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 237ms/epoch - 5ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.00291
48/48 - 0s - loss: 9.0817e-04 - accuracy: 0.0000e+00 - val_loss: 0.0029 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 234ms/epoch - 5ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.00291
48/48 - 0s - loss: 9.0367e-04 - accuracy: 0.0000e+00 - val_loss: 0.0029 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 232ms/epoch - 5ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.00291
48/48 - 0s - loss: 8.9947e-04 - accuracy: 0.0000e+00 - val_loss: 0.0029 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 226ms/epoch - 5ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.00291
48/48 - 0s - loss: 8.9553e-04 - accuracy: 0.0000e+00 - val_loss: 0.0030 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 249ms/epoch - 5ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.00291
48/48 - 0s - loss: 8.9186e-04 - accuracy: 0.0000e+00 - val_loss: 0.0030 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 257ms/epoch - 5ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.00291
48/48 - 0s - loss: 8.8842e-04 - accuracy: 0.0000e+00 - val_loss: 0.0030 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 235ms/epoch - 5ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.00291
48/48 - 0s - loss: 8.8520e-04 - accuracy: 0.0000e+00 - val_loss: 0.0030 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 220ms/epoch - 5ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.00291
48/48 - 0s - loss: 8.8218e-04 - accuracy: 0.0000e+00 - val_loss: 0.0030 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 231ms/epoch - 5ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.00291
48/48 - 0s - loss: 8.7934e-04 - accuracy: 0.0000e+00 - val_loss: 0.0030 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 261ms/epoch - 5ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.00291
48/48 - 0s - loss: 8.7667e-04 - accuracy: 0.0000e+00 - val_loss: 0.0030 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 221ms/epoch - 5ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.00291
48/48 - 0s - loss: 8.7416e-04 - accuracy: 0.0000e+00 - val_loss: 0.0031 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 234ms/epoch - 5ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.00291
48/48 - 0s - loss: 8.7178e-04 - accuracy: 0.0000e+00 - val_loss: 0.0031 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 233ms/epoch - 5ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.00291
48/48 - 0s - loss: 8.6953e-04 - accuracy: 0.0000e+00 - val_loss: 0.0031 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 242ms/epoch - 5ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.00291
48/48 - 0s - loss: 8.6738e-04 - accuracy: 0.0000e+00 - val_loss: 0.0031 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 235ms/epoch - 5ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.00291
48/48 - 0s - loss: 8.6533e-04 - accuracy: 0.0000e+00 - val_loss: 0.0032 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 224ms/epoch - 5ms/step
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.00291
48/48 - 0s - loss: 8.6337e-04 - accuracy: 0.0000e+00 - val_loss: 0.0032 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 225ms/epoch - 5ms/step
Epoch 57/500

Epoch 00057: val_loss did not improve from 0.00291
48/48 - 0s - loss: 8.6149e-04 - accuracy: 0.0000e+00 - val_loss: 0.0032 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 241ms/epoch - 5ms/step
Epoch 58/500

Epoch 00058: val_loss did not improve from 0.00291
48/48 - 0s - loss: 8.5966e-04 - accuracy: 0.0000e+00 - val_loss: 0.0032 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 227ms/epoch - 5ms/step
Epoch 59/500

Epoch 00059: val_loss did not improve from 0.00291
48/48 - 0s - loss: 8.5790e-04 - accuracy: 0.0000e+00 - val_loss: 0.0033 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 219ms/epoch - 5ms/step
Epoch 60/500

Epoch 00060: val_loss did not improve from 0.00291
48/48 - 0s - loss: 8.5618e-04 - accuracy: 0.0000e+00 - val_loss: 0.0033 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 232ms/epoch - 5ms/step
Epoch 61/500

Epoch 00061: val_loss did not improve from 0.00291
48/48 - 0s - loss: 8.5450e-04 - accuracy: 0.0000e+00 - val_loss: 0.0033 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 239ms/epoch - 5ms/step
Epoch 62/500

Epoch 00062: val_loss did not improve from 0.00291
48/48 - 0s - loss: 8.5286e-04 - accuracy: 0.0000e+00 - val_loss: 0.0034 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 226ms/epoch - 5ms/step
Epoch 63/500

Epoch 00063: val_loss did not improve from 0.00291
48/48 - 0s - loss: 8.5124e-04 - accuracy: 0.0000e+00 - val_loss: 0.0034 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 229ms/epoch - 5ms/step
Epoch 64/500

Epoch 00064: val_loss did not improve from 0.00291
48/48 - 0s - loss: 8.4964e-04 - accuracy: 0.0000e+00 - val_loss: 0.0034 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 249ms/epoch - 5ms/step
Epoch 65/500

Epoch 00065: val_loss did not improve from 0.00291
48/48 - 0s - loss: 8.4807e-04 - accuracy: 0.0000e+00 - val_loss: 0.0034 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 237ms/epoch - 5ms/step
Epoch 66/500

Epoch 00066: val_loss did not improve from 0.00291
48/48 - 0s - loss: 8.4650e-04 - accuracy: 0.0000e+00 - val_loss: 0.0035 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 233ms/epoch - 5ms/step
Epoch 67/500

Epoch 00067: val_loss did not improve from 0.00291
48/48 - 0s - loss: 8.4494e-04 - accuracy: 0.0000e+00 - val_loss: 0.0035 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 232ms/epoch - 5ms/step
Epoch 68/500

Epoch 00068: val_loss did not improve from 0.00291
48/48 - 0s - loss: 8.4339e-04 - accuracy: 0.0000e+00 - val_loss: 0.0035 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 235ms/epoch - 5ms/step
Epoch 69/500

Epoch 00069: val_loss did not improve from 0.00291
48/48 - 0s - loss: 8.4184e-04 - accuracy: 0.0000e+00 - val_loss: 0.0036 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 225ms/epoch - 5ms/step
Epoch 70/500

Epoch 00070: val_loss did not improve from 0.00291
48/48 - 0s - loss: 8.4028e-04 - accuracy: 0.0000e+00 - val_loss: 0.0036 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 241ms/epoch - 5ms/step
Epoch 71/500

Epoch 00071: val_loss did not improve from 0.00291
48/48 - 0s - loss: 8.3873e-04 - accuracy: 0.0000e+00 - val_loss: 0.0036 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 218ms/epoch - 5ms/step
Epoch 72/500

Epoch 00072: val_loss did not improve from 0.00291
48/48 - 0s - loss: 8.3717e-04 - accuracy: 0.0000e+00 - val_loss: 0.0037 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 224ms/epoch - 5ms/step
Epoch 73/500

Epoch 00073: val_loss did not improve from 0.00291
48/48 - 0s - loss: 8.3561e-04 - accuracy: 0.0000e+00 - val_loss: 0.0037 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 230ms/epoch - 5ms/step
Epoch 74/500

Epoch 00074: val_loss did not improve from 0.00291
48/48 - 0s - loss: 8.3404e-04 - accuracy: 0.0000e+00 - val_loss: 0.0038 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 223ms/epoch - 5ms/step
Epoch 75/500

Epoch 00075: val_loss did not improve from 0.00291
48/48 - 0s - loss: 8.3246e-04 - accuracy: 0.0000e+00 - val_loss: 0.0038 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 222ms/epoch - 5ms/step
Epoch 76/500

Epoch 00076: val_loss did not improve from 0.00291
48/48 - 0s - loss: 8.3087e-04 - accuracy: 0.0000e+00 - val_loss: 0.0038 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 212ms/epoch - 4ms/step
Epoch 77/500

Epoch 00077: val_loss did not improve from 0.00291
48/48 - 0s - loss: 8.2927e-04 - accuracy: 0.0000e+00 - val_loss: 0.0039 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 220ms/epoch - 5ms/step
Epoch 78/500

Epoch 00078: val_loss did not improve from 0.00291
48/48 - 0s - loss: 8.2766e-04 - accuracy: 0.0000e+00 - val_loss: 0.0039 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 227ms/epoch - 5ms/step
Epoch 79/500

Epoch 00079: val_loss did not improve from 0.00291
48/48 - 0s - loss: 8.2604e-04 - accuracy: 0.0000e+00 - val_loss: 0.0039 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 223ms/epoch - 5ms/step
Epoch 80/500

Epoch 00080: val_loss did not improve from 0.00291
48/48 - 0s - loss: 8.2441e-04 - accuracy: 0.0000e+00 - val_loss: 0.0040 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 225ms/epoch - 5ms/step
Epoch 81/500

Epoch 00081: val_loss did not improve from 0.00291
48/48 - 0s - loss: 8.2276e-04 - accuracy: 0.0000e+00 - val_loss: 0.0040 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 243ms/epoch - 5ms/step
Epoch 82/500

Epoch 00082: val_loss did not improve from 0.00291
48/48 - 0s - loss: 8.2110e-04 - accuracy: 0.0000e+00 - val_loss: 0.0041 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 223ms/epoch - 5ms/step
Epoch 83/500

Epoch 00083: val_loss did not improve from 0.00291
48/48 - 0s - loss: 8.1943e-04 - accuracy: 0.0000e+00 - val_loss: 0.0041 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 232ms/epoch - 5ms/step
Epoch 84/500

Epoch 00084: val_loss did not improve from 0.00291
48/48 - 0s - loss: 8.1775e-04 - accuracy: 0.0000e+00 - val_loss: 0.0042 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 231ms/epoch - 5ms/step
Epoch 85/500

Epoch 00085: val_loss did not improve from 0.00291
48/48 - 0s - loss: 8.1606e-04 - accuracy: 0.0000e+00 - val_loss: 0.0042 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 234ms/epoch - 5ms/step
Epoch 86/500

Epoch 00086: val_loss did not improve from 0.00291
48/48 - 0s - loss: 8.1435e-04 - accuracy: 0.0000e+00 - val_loss: 0.0042 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 245ms/epoch - 5ms/step
Epoch 87/500

Epoch 00087: val_loss did not improve from 0.00291
48/48 - 0s - loss: 8.1263e-04 - accuracy: 0.0000e+00 - val_loss: 0.0043 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 225ms/epoch - 5ms/step
Epoch 00087: early stopping
SMA
Prediction vs Close:		53.73% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 71.66891031373025 
RMSE:	 8.465749247038342 
MAPE:	 6.880610177712922
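The accuracy and error figures above can be reproduced with a few lines of NumPy. A minimal sketch, assuming `preds` and `actual` are equal-length forecast and actual arrays (hypothetical names; "Prediction vs Close" accuracy is interpreted here as day-over-day directional agreement):

```python
import numpy as np

def evaluate(preds, actual):
    """Return (MSE, RMSE, MAPE %, directional accuracy %) for a forecast."""
    preds = np.asarray(preds, dtype=float)
    actual = np.asarray(actual, dtype=float)
    mse = np.mean((preds - actual) ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs((actual - preds) / actual)) * 100
    # Directional accuracy: does each predicted day-over-day move
    # share the sign of the actual move?
    hits = np.sign(np.diff(preds)) == np.sign(np.diff(actual))
    return mse, rmse, mape, hits.mean() * 100
```

The exact accuracy definition used by the notebook's helper is not shown in this output, so the directional-agreement reading above is an assumption.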
EMA
EMA([input_arrays], [timeperiod=30])

Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
51
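The `EMA` help text above is TA-Lib's. If TA-Lib is unavailable, a comparable exponential moving average can be computed with pandas (a sketch; note TA-Lib seeds its EMA with an SMA over the first `timeperiod` values, so the warm-up values differ slightly from pandas' `adjust=False` recursion):

```python
import pandas as pd

def ema(series, timeperiod=30):
    """Exponential moving average using TA-Lib's smoothing factor
    alpha = 2 / (timeperiod + 1), via pandas' recursive ewm."""
    return pd.Series(series, dtype=float).ewm(span=timeperiod, adjust=False).mean()
```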

Working on EMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-17005.775, Time=2.24 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14574.593, Time=3.95 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16801.081, Time=8.73 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14572.593, Time=5.16 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-14532.068, Time=6.53 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-15610.472, Time=11.59 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-16103.302, Time=13.63 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-17030.021, Time=4.20 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=-17004.614, Time=3.13 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=-17087.441, Time=5.81 sec
 ARIMA(3,3,2)(0,0,0)[0]             : AIC=inf, Time=15.11 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal
  return np.roots(self.polynomial_reduced_ma)**-1
 ARIMA(2,3,2)(0,0,0)[0]             : AIC=-17003.984, Time=3.06 sec
 ARIMA(3,3,1)(0,0,0)[0] intercept   : AIC=-16998.666, Time=3.42 sec

Best model:  ARIMA(3,3,1)(0,0,0)[0]          
Total fit time: 86.580 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 1)   Log Likelihood                8569.720
Date:                Sun, 12 Dec 2021   AIC                         -17087.441
Time:                        16:39:56   BIC                         -16965.479
Sample:                             0   HQIC                        -17040.602
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -2.316e-10   9.87e-21  -2.35e+10      0.000   -2.32e-10   -2.32e-10
x2         -2.308e-10   9.85e-21  -2.34e+10      0.000   -2.31e-10   -2.31e-10
x3         -2.324e-10   9.88e-21  -2.35e+10      0.000   -2.32e-10   -2.32e-10
x4             1.0000   9.87e-21   1.01e+20      0.000       1.000       1.000
x5         -2.106e-10   9.41e-21  -2.24e+10      0.000   -2.11e-10   -2.11e-10
x6         -7.996e-10   1.74e-20  -4.59e+10      0.000      -8e-10      -8e-10
x7         -2.295e-10   9.82e-21  -2.34e+10      0.000   -2.29e-10   -2.29e-10
x8         -2.244e-10   9.71e-21  -2.31e+10      0.000   -2.24e-10   -2.24e-10
x9         -1.166e-11   1.98e-21   -5.9e+09      0.000   -1.17e-11   -1.17e-11
x10        -4.453e-11   4.22e-21  -1.06e+10      0.000   -4.45e-11   -4.45e-11
x11        -2.219e-10   9.65e-21   -2.3e+10      0.000   -2.22e-10   -2.22e-10
x12        -2.264e-10   9.76e-21  -2.32e+10      0.000   -2.26e-10   -2.26e-10
x13        -2.315e-10   9.87e-21  -2.35e+10      0.000   -2.31e-10   -2.31e-10
x14        -1.766e-09   2.73e-20  -6.48e+10      0.000   -1.77e-09   -1.77e-09
x15        -2.167e-10   9.37e-21  -2.31e+10      0.000   -2.17e-10   -2.17e-10
x16        -5.232e-10   1.49e-20  -3.52e+10      0.000   -5.23e-10   -5.23e-10
x17        -2.147e-10   9.48e-21  -2.27e+10      0.000   -2.15e-10   -2.15e-10
x18        -3.791e-11   3.96e-21  -9.56e+09      0.000   -3.79e-11   -3.79e-11
x19        -2.597e-10   1.05e-20  -2.48e+10      0.000    -2.6e-10    -2.6e-10
x20        -2.417e-10      1e-20  -2.41e+10      0.000   -2.42e-10   -2.42e-10
x21        -4.823e-10    1.4e-20  -3.44e+10      0.000   -4.82e-10   -4.82e-10
ar.L1         -0.4923   1.46e-22  -3.38e+21      0.000      -0.492      -0.492
ar.L2         -0.1923   8.47e-23  -2.27e+21      0.000      -0.192      -0.192
ar.L3         -0.0462   4.02e-23  -1.15e+21      0.000      -0.046      -0.046
ma.L1         -0.7077   3.31e-22  -2.14e+21      0.000      -0.708      -0.708
sigma2       8.99e-11   6.95e-11      1.293      0.196   -4.64e-11    2.26e-10
===================================================================================
Ljung-Box (L1) (Q):                  54.09   Jarque-Bera (JB):           4207353.17
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             5.48
Prob(H) (two-sided):                  0.00   Kurtosis:                       357.00
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 3.15e+40. Standard errors may be unstable.
ARIMA order: (3, 3, 1) 
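`auto_arima` ranks candidates by AIC, i.e. AIC = 2k − 2 ln L. With the reported log likelihood of 8569.720 and, by assumption, k = 26 estimated parameters (21 exogenous coefficients, 3 AR terms, 1 MA term, and sigma2), the table's AIC is recovered:

```python
def aic(loglik, n_params):
    """Akaike information criterion: 2k - 2 ln L (lower is better)."""
    return 2 * n_params - 2 * loglik

print(aic(8569.720, 26))  # close to the reported -17087.441
```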

/usr/local/lib/python3.7/dist-packages/keras/optimizer_v2/adam.py:105: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.
  super(Adam, self).__init__(name, **kwargs)
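The log below shows `ReduceLROnPlateau` cutting the learning rate whenever `val_loss` stalls. Its core bookkeeping can be sketched in plain Python (simplified: Keras also supports `min_delta` and a cooldown period, omitted here):

```python
def reduce_on_plateau(val_losses, lr=1e-3, factor=0.1, patience=5, min_lr=1e-5):
    """Replay a sequence of validation losses and return the lr in effect
    after each epoch, mimicking the patience/factor behaviour in the log."""
    best = float("inf")
    wait = 0
    history = []
    for loss in val_losses:
        if loss < best:          # improvement resets the patience counter
            best = loss
            wait = 0
        else:
            wait += 1
            if wait >= patience:  # stalled long enough: cut the lr
                lr = max(lr * factor, min_lr)
                wait = 0
        history.append(lr)
    return history
```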
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.01113, saving model to LSTM6.h5
16/16 - 3s - loss: 0.0891 - accuracy: 0.0000e+00 - val_loss: 0.0111 - val_accuracy: 0.0037 - lr: 0.0010 - 3s/epoch - 216ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.01113
16/16 - 0s - loss: 0.0226 - accuracy: 0.0000e+00 - val_loss: 0.0154 - val_accuracy: 0.0037 - lr: 0.0010 - 97ms/epoch - 6ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.01113
16/16 - 0s - loss: 0.0111 - accuracy: 0.0000e+00 - val_loss: 0.0118 - val_accuracy: 0.0037 - lr: 0.0010 - 89ms/epoch - 6ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.01113
16/16 - 0s - loss: 0.0026 - accuracy: 0.0000e+00 - val_loss: 0.0246 - val_accuracy: 0.0037 - lr: 0.0010 - 98ms/epoch - 6ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.01113
16/16 - 0s - loss: 0.0023 - accuracy: 0.0000e+00 - val_loss: 0.0205 - val_accuracy: 0.0037 - lr: 0.0010 - 89ms/epoch - 6ms/step
Epoch 6/500

Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00006: val_loss did not improve from 0.01113
16/16 - 0s - loss: 0.0032 - accuracy: 0.0000e+00 - val_loss: 0.0169 - val_accuracy: 0.0037 - lr: 0.0010 - 92ms/epoch - 6ms/step
[... epochs 7–43: val_loss improved nearly every epoch, from 0.01076 down to 0.00543 at lr 1.0000e-04, saving model to LSTM6.h5 on each improvement ...]
Epoch 44/500

Epoch 00044: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00044: val_loss did not improve from 0.00543
16/16 - 0s - loss: 8.3984e-04 - accuracy: 0.0000e+00 - val_loss: 0.0054 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 96ms/epoch - 6ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.00543
16/16 - 0s - loss: 8.3053e-04 - accuracy: 0.0000e+00 - val_loss: 0.0054 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 86ms/epoch - 5ms/step
[... epochs 46–92: with lr at 1.0000e-05, val_loss plateaued between 0.0054 and 0.0056 and never improved from 0.00543 ...]
Epoch 93/500

Epoch 00093: val_loss did not improve from 0.00543
16/16 - 0s - loss: 8.0130e-04 - accuracy: 0.0000e+00 - val_loss: 0.0056 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 85ms/epoch - 5ms/step
Epoch 00093: early stopping

EMA
Prediction vs Close:		54.85% Accuracy
Prediction vs Prediction:	43.66% Accuracy
MSE:	 67.24711347278334 
RMSE:	 8.200433736869249 
MAPE:	 6.781803215137433
WMA
WMA([input_arrays], [timeperiod=30])

Weighted Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
49
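TA-Lib's `WMA` applies linearly increasing weights: the newest observation in the window gets weight `timeperiod` and the oldest gets weight 1. A plain-NumPy sketch of that definition:

```python
import numpy as np

def wma(values, timeperiod=30):
    """Linearly weighted moving average (TA-Lib-style WMA): the newest
    value weighted `timeperiod`, the oldest weighted 1; NaN warm-up."""
    values = np.asarray(values, dtype=float)
    weights = np.arange(1, timeperiod + 1)
    out = np.full(len(values), np.nan)
    for i in range(timeperiod - 1, len(values)):
        window = values[i - timeperiod + 1 : i + 1]
        out[i] = np.dot(window, weights) / weights.sum()
    return out
```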

Working on WMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-14480.432, Time=8.68 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-15747.905, Time=6.24 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-15116.389, Time=6.89 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-13532.115, Time=7.94 sec
 ARIMA(0,3,0)(0,0,0)[0] intercept   : AIC=-13619.624, Time=5.24 sec

Best model:  ARIMA(0,3,0)(0,0,0)[0]          
Total fit time: 35.004 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(0, 3, 0)   Log Likelihood                7895.952
Date:                Sun, 12 Dec 2021   AIC                         -15747.905
Time:                        16:48:05   BIC                         -15644.706
Sample:                             0   HQIC                        -15708.272
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1          3.384e-05    1.9e-05      1.778      0.075   -3.47e-06    7.12e-05
x2          3.379e-05   1.84e-05      1.832      0.067   -2.35e-06    6.99e-05
x3          3.388e-05   4.34e-05      0.781      0.435   -5.12e-05       0.000
x4             1.0000   4.12e-06   2.43e+05      0.000       1.000       1.000
x5          3.227e-05   3.52e-06      9.163      0.000    2.54e-05    3.92e-05
x6          5.559e-05   6.75e-05      0.823      0.410   -7.67e-05       0.000
x7          3.369e-05   2.38e-05      1.415      0.157    -1.3e-05    8.03e-05
x8             0.0023    2.6e-05     86.661      0.000       0.002       0.002
x9          -8.72e-06   7.51e-07    -11.610      0.000   -1.02e-05   -7.25e-06
x10           -0.0023   3.33e-05    -67.770      0.000      -0.002      -0.002
x11            0.0093    2.8e-05    333.459      0.000       0.009       0.009
x12           -0.0118   2.37e-05   -498.171      0.000      -0.012      -0.012
x13         3.382e-05   1.49e-05      2.273      0.023    4.66e-06     6.3e-05
x14         9.271e-05   6.21e-05      1.493      0.135    -2.9e-05       0.000
x15         3.096e-05   1.92e-05      1.614      0.106   -6.63e-06    6.86e-05
x16          5.52e-05   7.17e-05      0.770      0.441   -8.53e-05       0.000
x17          3.38e-05    3.2e-05      1.056      0.291   -2.89e-05    9.65e-05
x18        -6.715e-06   8.34e-05     -0.081      0.936      -0.000       0.000
x19         3.428e-05   2.07e-05      1.654      0.098   -6.34e-06    7.49e-05
x20        -8.089e-06   9.55e-05     -0.085      0.933      -0.000       0.000
x21         4.255e-05      0.000      0.094      0.925      -0.001       0.001
sigma2      2.581e-10   7.87e-11      3.280      0.001    1.04e-10    4.12e-10
===================================================================================
Ljung-Box (L1) (Q):                 362.92   Jarque-Bera (JB):           5047564.68
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.01   Skew:                           -11.23
Prob(H) (two-sided):                  0.00   Kurtosis:                       390.28
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.75e+20. Standard errors may be unstable.
ARIMA order: (0, 3, 0) 
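An order of d = 3 means the model fits the series after differencing it three times; ARIMA(0,3,0) has no AR or MA terms left, so it is pure differencing. Heavily smoothed series tend to select a high d because repeated differencing flattens smooth trends: a quadratic trend has identically zero third differences, and a cubic's third differences are constant. A quick illustration:

```python
import numpy as np

def third_diff(series):
    """Difference a series three times, as ARIMA(0,3,0) does before
    fitting (here there is nothing left to fit: pure differencing)."""
    return np.diff(np.asarray(series, dtype=float), n=3)

quadratic = np.array([t**2 for t in range(6)], dtype=float)
cubic = np.array([t**3 for t in range(6)], dtype=float)
```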

/usr/local/lib/python3.7/dist-packages/keras/optimizer_v2/adam.py:105: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.
  super(Adam, self).__init__(name, **kwargs)
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.15122, saving model to LSTM6.h5
17/17 - 4s - loss: 0.1166 - accuracy: 0.0000e+00 - val_loss: 0.1512 - val_accuracy: 0.0000e+00 - lr: 0.0010 - 4s/epoch - 210ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.15122 to 0.08149, saving model to LSTM6.h5
17/17 - 0s - loss: 0.0463 - accuracy: 0.0000e+00 - val_loss: 0.0815 - val_accuracy: 0.0000e+00 - lr: 0.0010 - 124ms/epoch - 7ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.08149 to 0.00911, saving model to LSTM6.h5
17/17 - 0s - loss: 0.0137 - accuracy: 0.0000e+00 - val_loss: 0.0091 - val_accuracy: 0.0037 - lr: 0.0010 - 129ms/epoch - 8ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.00911 to 0.00792, saving model to LSTM6.h5
17/17 - 0s - loss: 0.0043 - accuracy: 0.0000e+00 - val_loss: 0.0079 - val_accuracy: 0.0037 - lr: 0.0010 - 113ms/epoch - 7ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.00792
17/17 - 0s - loss: 0.0047 - accuracy: 0.0000e+00 - val_loss: 0.0121 - val_accuracy: 0.0037 - lr: 0.0010 - 94ms/epoch - 6ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.00792
17/17 - 0s - loss: 0.0019 - accuracy: 0.0000e+00 - val_loss: 0.0160 - val_accuracy: 0.0037 - lr: 0.0010 - 99ms/epoch - 6ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.00792
17/17 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0155 - val_accuracy: 0.0037 - lr: 0.0010 - 90ms/epoch - 5ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.00792
17/17 - 0s - loss: 9.8841e-04 - accuracy: 0.0000e+00 - val_loss: 0.0143 - val_accuracy: 0.0037 - lr: 0.0010 - 94ms/epoch - 6ms/step
Epoch 9/500

Epoch 00009: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00009: val_loss did not improve from 0.00792
17/17 - 0s - loss: 9.5937e-04 - accuracy: 0.0000e+00 - val_loss: 0.0142 - val_accuracy: 0.0037 - lr: 0.0010 - 94ms/epoch - 6ms/step
[Epochs 10-54 omitted: val_loss never improved from 0.00792; ReduceLROnPlateau cut the learning rate to 1e-05 at epoch 14, and training loss drifted from 9.3381e-04 down to 8.8871e-04.]
Epoch 00054: early stopping
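The schedule visible in the log above (best val_loss at epoch 4, LR cuts at epochs 9/14/19, early stop at epoch 54) is consistent with a ReduceLROnPlateau patience of 5 and an EarlyStopping patience of 50, both monitoring val_loss. Those patience values are inferred from the trace, not confirmed by the notebook; this is a minimal pure-Python replay of that bookkeeping, not Keras itself:

```python
def replay_schedule(val_losses, lr=1e-3, factor=0.1, lr_patience=5,
                    stop_patience=50, min_lr=1e-5):
    """Replay ReduceLROnPlateau/EarlyStopping bookkeeping over a val_loss trace.

    Returns (best_val_loss, final_lr, stop_epoch). Patience values are
    assumptions inferred from the log, not read from the notebook's code.
    """
    best = float("inf")
    since_best = 0
    for epoch, vl in enumerate(val_losses, start=1):
        if vl < best:
            best, since_best = vl, 0
        else:
            since_best += 1
            if since_best % lr_patience == 0:    # plateau: cut the LR
                lr = max(lr * factor, min_lr)
            if since_best >= stop_patience:      # plateau too long: stop
                return best, lr, epoch
    return best, lr, len(val_losses)
```

Feeding it a trace shaped like the log (improvements through epoch 4, flat afterwards) reproduces the LR path 0.001 → 1e-04 → 1e-05 and the stop at epoch 54.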
SMA
Prediction vs Close:		53.73% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 71.66891031373025 
RMSE:	 8.465749247038342 
MAPE:	 6.880610177712922

EMA
Prediction vs Close:		54.85% Accuracy
Prediction vs Prediction:	43.66% Accuracy
MSE:	 67.24711347278334 
RMSE:	 8.200433736869249 
MAPE:	 6.781803215137433

WMA
Prediction vs Close:		55.6% Accuracy
Prediction vs Prediction:	47.76% Accuracy
MSE:	 58.550384022767716 
RMSE:	 7.6518222681115455 
MAPE:	 6.1413991074844
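The MSE/RMSE/MAPE figures above follow the standard definitions and can be reproduced with a few lines of NumPy. The notebook's exact accuracy definition is not shown in this output, so the directional-accuracy helper below is an assumption (sign of the day-over-day move in prediction vs. actual):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MSE, RMSE and MAPE (in percent) for a point forecast."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mse = np.mean((y_true - y_pred) ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs((y_true - y_pred) / y_true)) * 100
    return mse, rmse, mape

def directional_accuracy(y_true, y_pred):
    """Share of steps (%) where the predicted move has the same sign as the actual move.

    Assumed interpretation of the 'Prediction vs Close' accuracy above.
    """
    return np.mean(np.sign(np.diff(y_true)) == np.sign(np.diff(y_pred))) * 100
```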
DEMA
DEMA([input_arrays], [timeperiod=30])

Double Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
89
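TA-Lib's DEMA (documented above) reduces to the identity DEMA = 2·EMA − EMA(EMA). A pandas sketch of that identity, assuming span-based EMA smoothing; TA-Lib seeds its EMA with an SMA, so the earliest values will differ slightly from `talib.DEMA`:

```python
import pandas as pd

def dema(price: pd.Series, timeperiod: int = 30) -> pd.Series:
    """Double Exponential Moving Average: 2*EMA(price) - EMA(EMA(price))."""
    ema1 = price.ewm(span=timeperiod, adjust=False).mean()
    ema2 = ema1.ewm(span=timeperiod, adjust=False).mean()
    return 2 * ema1 - ema2
```

Subtracting the double-smoothed EMA removes much of the single EMA's lag, which is why DEMA tracks turning points faster than a plain EMA of the same period.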

Working on DEMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-17005.774, Time=2.34 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14574.593, Time=4.01 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-15590.302, Time=6.94 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14572.593, Time=5.65 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-15269.503, Time=7.14 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-16414.961, Time=8.11 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-16878.396, Time=9.36 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-17030.019, Time=3.97 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=-17004.613, Time=2.81 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=-17087.441, Time=5.86 sec
 ARIMA(3,3,2)(0,0,0)[0]             : AIC=inf, Time=14.61 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal
  return np.roots(self.polynomial_reduced_ma)**-1
 ARIMA(2,3,2)(0,0,0)[0]             : AIC=-17003.985, Time=3.09 sec
 ARIMA(3,3,1)(0,0,0)[0] intercept   : AIC=-16998.665, Time=3.65 sec

Best model:  ARIMA(3,3,1)(0,0,0)[0]          
Total fit time: 77.572 seconds
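At its core, the stepwise trace above is an argmin over AIC, where AIC = 2k − 2·log-likelihood; for the winning fit, 26 estimated parameters with log-likelihood 8569.721 gives AIC ≈ −17087.44, matching the SARIMAX table below. A sketch of the selection step, with the AIC values copied from the trace and `float("inf")` marking the fit that failed:

```python
# AIC per candidate (p, d, q), copied from the stepwise trace above.
aic = {
    (1, 3, 1): -17005.774, (0, 3, 0): -14574.593, (1, 3, 0): -15590.302,
    (0, 3, 1): -14572.593, (2, 3, 1): -15269.503, (1, 3, 2): -16414.961,
    (0, 3, 2): -16878.396, (2, 3, 0): -17030.019, (3, 3, 0): -17004.613,
    (3, 3, 1): -17087.441, (3, 3, 2): float("inf"), (2, 3, 2): -17003.985,
}
best_order = min(aic, key=aic.get)  # lowest AIC wins: (3, 3, 1)
```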
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 1)   Log Likelihood                8569.721
Date:                Sun, 12 Dec 2021   AIC                         -17087.441
Time:                        16:49:55   BIC                         -16965.479
Sample:                             0   HQIC                        -17040.603
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -2.799e-10   1.43e-20  -1.96e+10      0.000    -2.8e-10    -2.8e-10
x2         -2.817e-10   1.43e-20  -1.97e+10      0.000   -2.82e-10   -2.82e-10
x3         -2.805e-10   1.43e-20  -1.96e+10      0.000    -2.8e-10    -2.8e-10
x4             1.0000   1.43e-20      7e+19      0.000       1.000       1.000
x5         -2.597e-10   1.37e-20  -1.89e+10      0.000    -2.6e-10    -2.6e-10
x6         -1.388e-09   3.12e-20  -4.45e+10      0.000   -1.39e-09   -1.39e-09
x7         -2.789e-10   1.42e-20  -1.96e+10      0.000   -2.79e-10   -2.79e-10
x8          -2.76e-10   1.42e-20  -1.95e+10      0.000   -2.76e-10   -2.76e-10
x9         -2.216e-12   3.53e-22  -6.28e+09      0.000   -2.22e-12   -2.22e-12
x10        -1.345e-10   9.82e-21  -1.37e+10      0.000   -1.34e-10   -1.34e-10
x11        -2.898e-10   1.45e-20     -2e+10      0.000    -2.9e-10    -2.9e-10
x12        -2.602e-10   1.38e-20  -1.89e+10      0.000    -2.6e-10    -2.6e-10
x13        -2.807e-10   1.43e-20  -1.96e+10      0.000   -2.81e-10   -2.81e-10
x14         -1.87e-09   3.69e-20  -5.07e+10      0.000   -1.87e-09   -1.87e-09
x15        -2.726e-10   1.43e-20   -1.9e+10      0.000   -2.73e-10   -2.73e-10
x16        -7.915e-11   7.68e-21  -1.03e+10      0.000   -7.92e-11   -7.92e-11
x17        -2.606e-10   1.33e-20  -1.96e+10      0.000   -2.61e-10   -2.61e-10
x18        -6.408e-10   2.16e-20  -2.97e+10      0.000   -6.41e-10   -6.41e-10
x19        -2.881e-10   1.46e-20  -1.98e+10      0.000   -2.88e-10   -2.88e-10
x20        -4.337e-10   1.78e-20  -2.44e+10      0.000   -4.34e-10   -4.34e-10
x21        -4.549e-10   1.79e-20  -2.55e+10      0.000   -4.55e-10   -4.55e-10
ar.L1         -0.4923   1.46e-22  -3.38e+21      0.000      -0.492      -0.492
ar.L2         -0.1923   8.47e-23  -2.27e+21      0.000      -0.192      -0.192
ar.L3         -0.0461   4.02e-23  -1.15e+21      0.000      -0.046      -0.046
ma.L1         -0.7078   3.31e-22  -2.14e+21      0.000      -0.708      -0.708
sigma2       8.99e-11   6.95e-11      1.293      0.196   -4.64e-11    2.26e-10
===================================================================================
Ljung-Box (L1) (Q):                  55.07   Jarque-Bera (JB):           4171695.82
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             5.26
Prob(H) (two-sided):                  0.00   Kurtosis:                       355.51
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 2.62e+41. Standard errors may be unstable.
ARIMA order: (3, 3, 1) 

/usr/local/lib/python3.7/dist-packages/keras/optimizer_v2/adam.py:105: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.
  super(Adam, self).__init__(name, **kwargs)
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.06618, saving model to LSTM6.h5
10/10 - 4s - loss: 0.2504 - accuracy: 0.0000e+00 - val_loss: 0.0662 - val_accuracy: 0.0000e+00 - lr: 0.0010 - 4s/epoch - 380ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.06618 to 0.03100, saving model to LSTM6.h5
10/10 - 0s - loss: 0.0628 - accuracy: 0.0000e+00 - val_loss: 0.0310 - val_accuracy: 0.0037 - lr: 0.0010 - 91ms/epoch - 9ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.03100
10/10 - 0s - loss: 0.0124 - accuracy: 0.0000e+00 - val_loss: 0.0482 - val_accuracy: 0.0037 - lr: 0.0010 - 63ms/epoch - 6ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.03100
10/10 - 0s - loss: 0.0035 - accuracy: 0.0000e+00 - val_loss: 0.0460 - val_accuracy: 0.0037 - lr: 0.0010 - 66ms/epoch - 7ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.03100
10/10 - 0s - loss: 0.0018 - accuracy: 0.0000e+00 - val_loss: 0.0350 - val_accuracy: 0.0037 - lr: 0.0010 - 62ms/epoch - 6ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.03100 to 0.02818, saving model to LSTM6.h5
10/10 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0282 - val_accuracy: 0.0037 - lr: 0.0010 - 86ms/epoch - 9ms/step
Epoch 7/500

Epoch 00007: val_loss improved from 0.02818 to 0.02201, saving model to LSTM6.h5
10/10 - 0s - loss: 9.9207e-04 - accuracy: 0.0000e+00 - val_loss: 0.0220 - val_accuracy: 0.0037 - lr: 0.0010 - 77ms/epoch - 8ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.02201
10/10 - 0s - loss: 9.8483e-04 - accuracy: 0.0000e+00 - val_loss: 0.0234 - val_accuracy: 0.0037 - lr: 0.0010 - 59ms/epoch - 6ms/step
[Epochs 9-57 omitted: val_loss never improved from 0.02201; the learning rate was cut to 1e-04 at epoch 12 and 1e-05 at epoch 17, and training loss settled near 8.69e-04.]
Epoch 00057: early stopping

DEMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 127.72881036471918 
RMSE:	 11.301717142307146 
MAPE:	 10.306940424406019
KAMA
KAMA([input_arrays], [timeperiod=30])

Kaufman Adaptive Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
18
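Unlike a fixed-weight MA, KAMA adapts its smoothing constant to Kaufman's efficiency ratio (net move over total path length), smoothing hard in choppy markets and tracking tightly in trends. A NumPy sketch of that formula; TA-Lib's exact seeding of the first value may differ:

```python
import numpy as np

def kama(price, timeperiod=30, fast=2, slow=30):
    """Kaufman Adaptive Moving Average.

    er  = |net change over window| / sum of |bar-to-bar changes|  (efficiency ratio)
    sc  = (er * (fastest - slowest) + slowest) ** 2               (smoothing constant)
    out = out_prev + sc * (price - out_prev)
    """
    price = np.asarray(price, dtype=float)
    n = len(price)
    out = np.full(n, np.nan)
    out[timeperiod - 1] = price[timeperiod - 1]      # seed (assumption)
    fastest, slowest = 2 / (fast + 1), 2 / (slow + 1)
    for i in range(timeperiod, n):
        change = abs(price[i] - price[i - timeperiod])
        volatility = np.sum(np.abs(np.diff(price[i - timeperiod:i + 1])))
        er = change / volatility if volatility else 1.0
        sc = (er * (fastest - slowest) + slowest) ** 2
        out[i] = out[i - 1] + sc * (price[i] - out[i - 1])
    return out
```

On a flat series KAMA stays flat, and on a clean trend er → 1, so the smoothing constant approaches its fast limit and KAMA hugs the price.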

Working on KAMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-17005.902, Time=2.09 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14574.593, Time=3.81 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16796.316, Time=7.94 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14572.593, Time=5.24 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-17004.193, Time=2.14 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-15176.063, Time=9.59 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-16873.638, Time=9.98 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-17008.756, Time=2.44 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=-17004.764, Time=3.39 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=-15723.849, Time=12.69 sec
 ARIMA(2,3,0)(0,0,0)[0] intercept   : AIC=-17006.756, Time=2.45 sec

Best model:  ARIMA(2,3,0)(0,0,0)[0]          
Total fit time: 61.772 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(2, 3, 0)   Log Likelihood                8528.378
Date:                Sun, 12 Dec 2021   AIC                         -17008.756
Time:                        16:58:40   BIC                         -16896.176
Sample:                             0   HQIC                        -16965.520
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1          -2.24e-15   7.41e-26  -3.02e+10      0.000   -2.24e-15   -2.24e-15
x2          8.461e-16    6.6e-26   1.28e+10      0.000    8.46e-16    8.46e-16
x3          4.901e-16   6.89e-26   7.11e+09      0.000     4.9e-16     4.9e-16
x4             1.0000   6.96e-26   1.44e+25      0.000       1.000       1.000
x5          5.931e-15   6.61e-26   8.97e+10      0.000    5.93e-15    5.93e-15
x6          -1.05e-15    1.5e-25     -7e+09      0.000   -1.05e-15   -1.05e-15
x7          1.439e-15   6.87e-26    2.1e+10      0.000    1.44e-15    1.44e-15
x8          -1.25e-15    6.8e-26  -1.84e+10      0.000   -1.25e-15   -1.25e-15
x9         -9.356e-17   8.97e-27  -1.04e+10      0.000   -9.36e-17   -9.36e-17
x10        -1.145e-16   2.88e-26  -3.98e+09      0.000   -1.15e-16   -1.15e-16
x11        -2.036e-16    6.8e-26     -3e+09      0.000   -2.04e-16   -2.04e-16
x12         5.951e-16   6.76e-26   8.81e+09      0.000    5.95e-16    5.95e-16
x13        -6.117e-17   6.94e-26  -8.81e+08      0.000   -6.12e-17   -6.12e-17
x14         1.167e-15   1.99e-25   5.85e+09      0.000    1.17e-15    1.17e-15
x15        -4.274e-14   6.99e-26  -6.11e+11      0.000   -4.27e-14   -4.27e-14
x16         2.262e-14   8.56e-26   2.64e+11      0.000    2.26e-14    2.26e-14
x17         3.384e-14   6.46e-26   5.24e+11      0.000    3.38e-14    3.38e-14
x18         9.894e-16    5.8e-26   1.71e+10      0.000    9.89e-16    9.89e-16
x19         4.115e-14   7.75e-26   5.31e+11      0.000    4.12e-14    4.12e-14
x20        -2.176e-15   9.49e-26  -2.29e+10      0.000   -2.18e-15   -2.18e-15
x21        -7.755e-17   4.63e-26  -1.67e+09      0.000   -7.75e-17   -7.75e-17
ar.L1         -0.9988   9.76e-22  -1.02e+21      0.000      -0.999      -0.999
ar.L2         -0.4972   4.07e-23  -1.22e+22      0.000      -0.497      -0.497
sigma2          1e-10   6.99e-11      1.432      0.152   -3.69e-11    2.37e-10
===================================================================================
Ljung-Box (L1) (Q):                  31.54   Jarque-Bera (JB):           2432532.03
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                            -0.15
Prob(H) (two-sided):                  0.00   Kurtosis:                       272.30
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 7.19e+48. Standard errors may be unstable.
ARIMA order: (2, 3, 0) 

/usr/local/lib/python3.7/dist-packages/keras/optimizer_v2/adam.py:105: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.
  super(Adam, self).__init__(name, **kwargs)
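The `UserWarning` above is benign but easy to silence: recent Keras versions renamed the optimizer's `lr` argument to `learning_rate`. A minimal sketch of the fix (assuming the notebook constructs the Adam optimizer directly):

```python
# Passing `lr=` triggers the deprecation warning above; `learning_rate=`
# is the current argument name (sketch, assuming tf.keras is in use).
from tensorflow.keras.optimizers import Adam

optimizer = Adam(learning_rate=1e-3)  # instead of Adam(lr=1e-3)
```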
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.04322, saving model to LSTM6.h5
45/45 - 4s - loss: 0.1157 - accuracy: 0.0000e+00 - val_loss: 0.0432 - val_accuracy: 0.0037 - lr: 0.0010 - 4s/epoch - 88ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.04322 to 0.02118, saving model to LSTM6.h5
45/45 - 0s - loss: 0.0138 - accuracy: 0.0000e+00 - val_loss: 0.0212 - val_accuracy: 0.0037 - lr: 0.0010 - 223ms/epoch - 5ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.02118
45/45 - 0s - loss: 0.0091 - accuracy: 0.0000e+00 - val_loss: 0.0243 - val_accuracy: 0.0037 - lr: 0.0010 - 220ms/epoch - 5ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.02118 to 0.00757, saving model to LSTM6.h5
45/45 - 0s - loss: 0.0124 - accuracy: 0.0000e+00 - val_loss: 0.0076 - val_accuracy: 0.0037 - lr: 0.0010 - 249ms/epoch - 6ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.00757
45/45 - 0s - loss: 0.0094 - accuracy: 0.0000e+00 - val_loss: 0.0082 - val_accuracy: 0.0037 - lr: 0.0010 - 214ms/epoch - 5ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.00757 to 0.00671, saving model to LSTM6.h5
45/45 - 0s - loss: 0.0145 - accuracy: 0.0000e+00 - val_loss: 0.0067 - val_accuracy: 0.0037 - lr: 0.0010 - 232ms/epoch - 5ms/step
Epoch 7/500

Epoch 00007: val_loss improved from 0.00671 to 0.00595, saving model to LSTM6.h5
45/45 - 0s - loss: 0.0087 - accuracy: 0.0000e+00 - val_loss: 0.0060 - val_accuracy: 0.0037 - lr: 0.0010 - 228ms/epoch - 5ms/step
Epoch 8/500

Epoch 00008: val_loss improved from 0.00595 to 0.00564, saving model to LSTM6.h5
45/45 - 0s - loss: 0.0035 - accuracy: 0.0000e+00 - val_loss: 0.0056 - val_accuracy: 0.0037 - lr: 0.0010 - 231ms/epoch - 5ms/step
Epochs 9-58: val_loss did not improve from 0.00564. ReduceLROnPlateau cut the learning rate to 1.0e-04 at epoch 13 and 1.0e-05 at epoch 18 (floored at epoch 23); training loss drifted from 0.0015 down to 7.7600e-04 while val_loss rose to 0.0122.
Epoch 00058: early stopping
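The run above checkpoints to `LSTM6.h5` on every val_loss improvement, steps the learning rate down on plateaus, and finally triggers early stopping at epoch 58. The callback stack implied by that log can be sketched as follows; the exact `patience`, `factor`, and `min_lr` values are assumptions inferred from the log, not the notebook's code:

```python
# Callback stack consistent with the training log above (a sketch;
# patience/factor/min_lr values are assumptions, not from the source).
from tensorflow.keras.callbacks import (EarlyStopping, ModelCheckpoint,
                                        ReduceLROnPlateau)

callbacks = [
    # Save the best weights whenever val_loss improves.
    ModelCheckpoint('LSTM6.h5', monitor='val_loss',
                    save_best_only=True, verbose=1),
    # Cut the learning rate 10x on a plateau, with a 1e-5 floor
    # (matches the reductions logged at epochs 13, 18 and 23).
    ReduceLROnPlateau(monitor='val_loss', factor=0.1,
                      patience=5, min_lr=1e-5, verbose=1),
    # Stop after a long plateau ("Epoch 00058: early stopping").
    EarlyStopping(monitor='val_loss', patience=50, verbose=1),
]
# model.fit(X_train, y_train, epochs=500, validation_data=(X_val, y_val),
#           callbacks=callbacks, verbose=2)
```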
SMA
Prediction vs Close:		53.73% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 71.66891031373025 
RMSE:	 8.465749247038342 
MAPE:	 6.880610177712922

EMA
Prediction vs Close:		54.85% Accuracy
Prediction vs Prediction:	43.66% Accuracy
MSE:	 67.24711347278334 
RMSE:	 8.200433736869249 
MAPE:	 6.781803215137433

WMA
Prediction vs Close:		55.6% Accuracy
Prediction vs Prediction:	47.76% Accuracy
MSE:	 58.550384022767716 
RMSE:	 7.6518222681115455 
MAPE:	 6.1413991074844

DEMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 127.72881036471918 
RMSE:	 11.301717142307146 
MAPE:	 10.306940424406019

KAMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 43.855468679122 
RMSE:	 6.622346161227303 
MAPE:	 5.4751276749367985
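The per-indicator scores above combine standard error metrics with a directional hit rate. They can be reproduced with a few NumPy lines; note the directional-accuracy definition below (sign of the step-to-step change) is an assumption about how "Prediction vs Close" is computed, not the notebook's own code:

```python
# Error metrics matching those reported above (a sketch; the
# directional-accuracy definition is an assumption).
import numpy as np

def report(actual, predicted):
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    mse = np.mean((actual - predicted) ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs((actual - predicted) / actual)) * 100
    # "Prediction vs Close": did the prediction move in the same
    # direction as the actual series from one step to the next?
    same_dir = np.sign(np.diff(predicted)) == np.sign(np.diff(actual))
    return mse, rmse, mape, 100 * same_dir.mean()

mse, rmse, mape, acc = report([100, 102, 101, 105], [99, 103, 100, 104])
```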
MIDPOINT
MIDPOINT([input_arrays], [timeperiod=14])

MidPoint over period (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 14
Outputs:
    real
14
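As the help text above states, MIDPOINT is simply the midpoint of the highest and lowest price over the rolling window. A pure-pandas sketch of the same computation (equivalent in output to `talib.MIDPOINT`, which the notebook appears to call):

```python
# MIDPOINT as documented above: (highest + lowest) / 2 over a rolling
# window of `timeperiod` values (pure-pandas equivalent sketch).
import pandas as pd

def midpoint(price, timeperiod=14):
    s = pd.Series(price, dtype=float)
    roll = s.rolling(timeperiod)
    # The first timeperiod-1 rows are NaN, as with TA-Lib's lookback.
    return (roll.max() + roll.min()) / 2

mp = midpoint(range(1, 21), timeperiod=14)
```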

Working on MIDPOINT predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-17005.753, Time=2.12 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14574.592, Time=3.89 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16288.639, Time=10.65 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14572.592, Time=5.03 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-15275.254, Time=7.35 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-15486.751, Time=12.13 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=48.000, Time=0.45 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-17008.491, Time=2.27 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=-17004.554, Time=2.87 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=-17087.445, Time=5.71 sec
 ARIMA(3,3,2)(0,0,0)[0]             : AIC=-15686.421, Time=9.73 sec
 ARIMA(2,3,2)(0,0,0)[0]             : AIC=-17030.168, Time=14.06 sec
 ARIMA(3,3,1)(0,0,0)[0] intercept   : AIC=-15138.715, Time=13.34 sec

Best model:  ARIMA(3,3,1)(0,0,0)[0]          
Total fit time: 89.606 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 1)   Log Likelihood                8569.722
Date:                Sun, 12 Dec 2021   AIC                         -17087.445
Time:                        17:01:17   BIC                         -16965.483
Sample:                             0   HQIC                        -17040.607
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1          -2.14e-10   1.09e-20  -1.96e+10      0.000   -2.14e-10   -2.14e-10
x2         -2.126e-10   1.13e-20  -1.88e+10      0.000   -2.13e-10   -2.13e-10
x3         -2.175e-10   1.06e-20  -2.04e+10      0.000   -2.17e-10   -2.17e-10
x4             1.0000    1.1e-20   9.11e+19      0.000       1.000       1.000
x5         -1.941e-10   1.05e-20  -1.86e+10      0.000   -1.94e-10   -1.94e-10
x6         -4.131e-09   7.64e-20   -5.4e+10      0.000   -4.13e-09   -4.13e-09
x7         -1.965e-10   1.05e-20  -1.86e+10      0.000   -1.96e-10   -1.96e-10
x8         -1.961e-10   1.07e-20  -1.84e+10      0.000   -1.96e-10   -1.96e-10
x9         -1.005e-10   9.12e-22   -1.1e+11      0.000      -1e-10      -1e-10
x10        -1.739e-10   3.37e-21  -5.16e+10      0.000   -1.74e-10   -1.74e-10
x11        -1.941e-10   1.07e-20  -1.82e+10      0.000   -1.94e-10   -1.94e-10
x12        -2.005e-10   1.06e-20  -1.89e+10      0.000      -2e-10      -2e-10
x13        -2.056e-10   1.07e-20  -1.91e+10      0.000   -2.06e-10   -2.06e-10
x14        -1.687e-09   3.15e-20  -5.36e+10      0.000   -1.69e-09   -1.69e-09
x15        -2.365e-10   1.17e-20  -2.01e+10      0.000   -2.36e-10   -2.36e-10
x16        -1.523e-10   9.42e-21  -1.62e+10      0.000   -1.52e-10   -1.52e-10
x17        -1.491e-10   9.33e-21   -1.6e+10      0.000   -1.49e-10   -1.49e-10
x18        -6.404e-10   1.93e-20  -3.32e+10      0.000    -6.4e-10    -6.4e-10
x19        -2.596e-10   1.23e-20  -2.11e+10      0.000    -2.6e-10    -2.6e-10
x20        -6.246e-10   1.91e-20  -3.28e+10      0.000   -6.25e-10   -6.25e-10
x21        -1.953e-09   2.16e-20  -9.04e+10      0.000   -1.95e-09   -1.95e-09
ar.L1         -0.4914   1.46e-22  -3.38e+21      0.000      -0.491      -0.491
ar.L2         -0.1934   8.48e-23  -2.28e+21      0.000      -0.193      -0.193
ar.L3         -0.0491    4.2e-23  -1.17e+21      0.000      -0.049      -0.049
ma.L1         -0.7092   3.33e-22  -2.13e+21      0.000      -0.709      -0.709
sigma2       8.99e-11   6.95e-11      1.293      0.196   -4.64e-11    2.26e-10
===================================================================================
Ljung-Box (L1) (Q):                  32.51   Jarque-Bera (JB):             49038.38
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.35   Skew:                             1.06
Prob(H) (two-sided):                  0.00   Kurtosis:                        41.18
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.71e+40. Standard errors may be unstable.
ARIMA order: (3, 3, 1) 

/usr/local/lib/python3.7/dist-packages/keras/optimizer_v2/adam.py:105: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.
  super(Adam, self).__init__(name, **kwargs)
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.00811, saving model to LSTM6.h5
58/58 - 4s - loss: 0.1472 - accuracy: 0.0000e+00 - val_loss: 0.0081 - val_accuracy: 0.0037 - lr: 0.0010 - 4s/epoch - 63ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.00811
58/58 - 0s - loss: 0.0220 - accuracy: 0.0000e+00 - val_loss: 0.0081 - val_accuracy: 0.0037 - lr: 0.0010 - 293ms/epoch - 5ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.00811
58/58 - 0s - loss: 0.0168 - accuracy: 0.0000e+00 - val_loss: 0.0433 - val_accuracy: 0.0037 - lr: 0.0010 - 274ms/epoch - 5ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.00811 to 0.00638, saving model to LSTM6.h5
58/58 - 0s - loss: 0.0052 - accuracy: 0.0000e+00 - val_loss: 0.0064 - val_accuracy: 0.0037 - lr: 0.0010 - 299ms/epoch - 5ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.00638
58/58 - 0s - loss: 0.0028 - accuracy: 0.0000e+00 - val_loss: 0.0088 - val_accuracy: 0.0037 - lr: 0.0010 - 273ms/epoch - 5ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.00638
58/58 - 0s - loss: 0.0062 - accuracy: 0.0000e+00 - val_loss: 0.0069 - val_accuracy: 0.0037 - lr: 0.0010 - 283ms/epoch - 5ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.00638
58/58 - 0s - loss: 0.0116 - accuracy: 0.0000e+00 - val_loss: 0.0070 - val_accuracy: 0.0037 - lr: 0.0010 - 271ms/epoch - 5ms/step
Epoch 8/500

Epoch 00008: val_loss improved from 0.00638 to 0.00531, saving model to LSTM6.h5
58/58 - 0s - loss: 0.0209 - accuracy: 0.0000e+00 - val_loss: 0.0053 - val_accuracy: 0.0037 - lr: 0.0010 - 281ms/epoch - 5ms/step
Epoch 9/500

Epoch 00009: val_loss improved from 0.00531 to 0.00390, saving model to LSTM6.h5
58/58 - 0s - loss: 0.0246 - accuracy: 0.0000e+00 - val_loss: 0.0039 - val_accuracy: 0.0037 - lr: 0.0010 - 299ms/epoch - 5ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.00390
58/58 - 0s - loss: 0.0153 - accuracy: 0.0000e+00 - val_loss: 0.0056 - val_accuracy: 0.0037 - lr: 0.0010 - 275ms/epoch - 5ms/step
Epoch 11/500

Epoch 00011: val_loss improved from 0.00390 to 0.00352, saving model to LSTM6.h5
58/58 - 0s - loss: 0.0050 - accuracy: 0.0000e+00 - val_loss: 0.0035 - val_accuracy: 0.0037 - lr: 0.0010 - 302ms/epoch - 5ms/step
Epoch 12/500

Epoch 00012: val_loss improved from 0.00352 to 0.00301, saving model to LSTM6.h5
58/58 - 0s - loss: 0.0028 - accuracy: 0.0000e+00 - val_loss: 0.0030 - val_accuracy: 0.0037 - lr: 0.0010 - 299ms/epoch - 5ms/step
Epochs 13-50: val_loss did not improve from 0.00301. ReduceLROnPlateau cut the learning rate to 1.0e-04 at epoch 17 and 1.0e-05 at epoch 22 (floored at epoch 27); val_loss spiked to 0.0140 at epoch 18, then decayed back to 0.0030 by epoch 50 as training loss fell to 8.7968e-04.
Epochs 51-72: val_loss improved marginally each epoch, from 0.00300 down to 0.00250, with the model checkpointed to LSTM6.h5 at every improvement; training loss eased from 8.7634e-04 to 8.2807e-04.
Epoch 73/500

Epoch 00073: val_loss improved from 0.00250 to 0.00250, saving model to LSTM6.h5
58/58 - 0s - loss: 8.2628e-04 - accuracy: 0.0000e+00 - val_loss: 0.0025 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 282ms/epoch - 5ms/step
Epoch 74/500

Epoch 00074: val_loss did not improve from 0.00250
58/58 - 0s - loss: 8.2451e-04 - accuracy: 0.0000e+00 - val_loss: 0.0025 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 271ms/epoch - 5ms/step
Epoch 75/500

Epoch 00075: val_loss did not improve from 0.00250
58/58 - 0s - loss: 8.2274e-04 - accuracy: 0.0000e+00 - val_loss: 0.0025 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 275ms/epoch - 5ms/step
Epoch 76/500

Epoch 00076: val_loss did not improve from 0.00250
58/58 - 0s - loss: 8.2098e-04 - accuracy: 0.0000e+00 - val_loss: 0.0025 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 276ms/epoch - 5ms/step
Epoch 77/500

Epoch 00077: val_loss did not improve from 0.00250
58/58 - 0s - loss: 8.1922e-04 - accuracy: 0.0000e+00 - val_loss: 0.0025 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 272ms/epoch - 5ms/step
Epoch 78/500

Epoch 00078: val_loss did not improve from 0.00250
58/58 - 0s - loss: 8.1746e-04 - accuracy: 0.0000e+00 - val_loss: 0.0025 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 254ms/epoch - 4ms/step
Epoch 79/500

Epoch 00079: val_loss did not improve from 0.00250
58/58 - 0s - loss: 8.1570e-04 - accuracy: 0.0000e+00 - val_loss: 0.0025 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 260ms/epoch - 4ms/step
Epoch 80/500

Epoch 00080: val_loss did not improve from 0.00250
58/58 - 0s - loss: 8.1394e-04 - accuracy: 0.0000e+00 - val_loss: 0.0025 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 274ms/epoch - 5ms/step
Epoch 81/500

Epoch 00081: val_loss did not improve from 0.00250
58/58 - 0s - loss: 8.1217e-04 - accuracy: 0.0000e+00 - val_loss: 0.0025 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 281ms/epoch - 5ms/step
Epoch 82/500

Epoch 00082: val_loss did not improve from 0.00250
58/58 - 0s - loss: 8.1039e-04 - accuracy: 0.0000e+00 - val_loss: 0.0025 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 261ms/epoch - 4ms/step
Epoch 83/500

Epoch 00083: val_loss did not improve from 0.00250
58/58 - 0s - loss: 8.0861e-04 - accuracy: 0.0000e+00 - val_loss: 0.0026 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 285ms/epoch - 5ms/step
Epoch 84/500

Epoch 00084: val_loss did not improve from 0.00250
58/58 - 0s - loss: 8.0682e-04 - accuracy: 0.0000e+00 - val_loss: 0.0026 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 267ms/epoch - 5ms/step
Epoch 85/500

Epoch 00085: val_loss did not improve from 0.00250
58/58 - 0s - loss: 8.0502e-04 - accuracy: 0.0000e+00 - val_loss: 0.0026 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 293ms/epoch - 5ms/step
Epoch 86/500

Epoch 00086: val_loss did not improve from 0.00250
58/58 - 0s - loss: 8.0320e-04 - accuracy: 0.0000e+00 - val_loss: 0.0026 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 265ms/epoch - 5ms/step
Epoch 87/500

Epoch 00087: val_loss did not improve from 0.00250
58/58 - 0s - loss: 8.0138e-04 - accuracy: 0.0000e+00 - val_loss: 0.0026 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 282ms/epoch - 5ms/step
Epoch 88/500

Epoch 00088: val_loss did not improve from 0.00250
58/58 - 0s - loss: 7.9955e-04 - accuracy: 0.0000e+00 - val_loss: 0.0026 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 263ms/epoch - 5ms/step
Epoch 89/500

Epoch 00089: val_loss did not improve from 0.00250
58/58 - 0s - loss: 7.9770e-04 - accuracy: 0.0000e+00 - val_loss: 0.0027 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 267ms/epoch - 5ms/step
Epoch 90/500

Epoch 00090: val_loss did not improve from 0.00250
58/58 - 0s - loss: 7.9585e-04 - accuracy: 0.0000e+00 - val_loss: 0.0027 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 265ms/epoch - 5ms/step
Epoch 91/500

Epoch 00091: val_loss did not improve from 0.00250
58/58 - 0s - loss: 7.9398e-04 - accuracy: 0.0000e+00 - val_loss: 0.0027 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 262ms/epoch - 5ms/step
Epoch 92/500

Epoch 00092: val_loss did not improve from 0.00250
58/58 - 0s - loss: 7.9209e-04 - accuracy: 0.0000e+00 - val_loss: 0.0027 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 260ms/epoch - 4ms/step
Epoch 93/500

Epoch 00093: val_loss did not improve from 0.00250
58/58 - 0s - loss: 7.9020e-04 - accuracy: 0.0000e+00 - val_loss: 0.0028 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 280ms/epoch - 5ms/step
Epoch 94/500

Epoch 00094: val_loss did not improve from 0.00250
58/58 - 0s - loss: 7.8829e-04 - accuracy: 0.0000e+00 - val_loss: 0.0028 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 265ms/epoch - 5ms/step
Epoch 95/500

Epoch 00095: val_loss did not improve from 0.00250
58/58 - 0s - loss: 7.8637e-04 - accuracy: 0.0000e+00 - val_loss: 0.0028 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 268ms/epoch - 5ms/step
Epoch 96/500

Epoch 00096: val_loss did not improve from 0.00250
58/58 - 0s - loss: 7.8444e-04 - accuracy: 0.0000e+00 - val_loss: 0.0028 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 255ms/epoch - 4ms/step
Epoch 97/500

Epoch 00097: val_loss did not improve from 0.00250
58/58 - 0s - loss: 7.8250e-04 - accuracy: 0.0000e+00 - val_loss: 0.0029 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 273ms/epoch - 5ms/step
Epoch 98/500

Epoch 00098: val_loss did not improve from 0.00250
58/58 - 0s - loss: 7.8055e-04 - accuracy: 0.0000e+00 - val_loss: 0.0029 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 270ms/epoch - 5ms/step
Epoch 99/500

Epoch 00099: val_loss did not improve from 0.00250
58/58 - 0s - loss: 7.7858e-04 - accuracy: 0.0000e+00 - val_loss: 0.0029 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 267ms/epoch - 5ms/step
Epoch 100/500

Epoch 00100: val_loss did not improve from 0.00250
58/58 - 0s - loss: 7.7660e-04 - accuracy: 0.0000e+00 - val_loss: 0.0029 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 265ms/epoch - 5ms/step
Epoch 101/500

Epoch 00101: val_loss did not improve from 0.00250
58/58 - 0s - loss: 7.7462e-04 - accuracy: 0.0000e+00 - val_loss: 0.0030 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 265ms/epoch - 5ms/step
Epoch 102/500

Epoch 00102: val_loss did not improve from 0.00250
58/58 - 0s - loss: 7.7262e-04 - accuracy: 0.0000e+00 - val_loss: 0.0030 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 271ms/epoch - 5ms/step
Epoch 103/500

Epoch 00103: val_loss did not improve from 0.00250
58/58 - 0s - loss: 7.7061e-04 - accuracy: 0.0000e+00 - val_loss: 0.0030 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 262ms/epoch - 5ms/step
Epoch 104/500

Epoch 00104: val_loss did not improve from 0.00250
58/58 - 0s - loss: 7.6859e-04 - accuracy: 0.0000e+00 - val_loss: 0.0031 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 261ms/epoch - 4ms/step
Epoch 105/500

Epoch 00105: val_loss did not improve from 0.00250
58/58 - 0s - loss: 7.6657e-04 - accuracy: 0.0000e+00 - val_loss: 0.0031 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 281ms/epoch - 5ms/step
Epoch 106/500

Epoch 00106: val_loss did not improve from 0.00250
58/58 - 0s - loss: 7.6454e-04 - accuracy: 0.0000e+00 - val_loss: 0.0031 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 274ms/epoch - 5ms/step
Epoch 107/500

Epoch 00107: val_loss did not improve from 0.00250
58/58 - 0s - loss: 7.6249e-04 - accuracy: 0.0000e+00 - val_loss: 0.0032 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 281ms/epoch - 5ms/step
Epoch 108/500

Epoch 00108: val_loss did not improve from 0.00250
58/58 - 0s - loss: 7.6045e-04 - accuracy: 0.0000e+00 - val_loss: 0.0032 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 262ms/epoch - 5ms/step
Epoch 109/500

Epoch 00109: val_loss did not improve from 0.00250
58/58 - 0s - loss: 7.5839e-04 - accuracy: 0.0000e+00 - val_loss: 0.0032 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 270ms/epoch - 5ms/step
Epoch 110/500

Epoch 00110: val_loss did not improve from 0.00250
58/58 - 0s - loss: 7.5633e-04 - accuracy: 0.0000e+00 - val_loss: 0.0033 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 263ms/epoch - 5ms/step
Epoch 111/500

Epoch 00111: val_loss did not improve from 0.00250
58/58 - 0s - loss: 7.5427e-04 - accuracy: 0.0000e+00 - val_loss: 0.0033 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 261ms/epoch - 5ms/step
Epoch 112/500

Epoch 00112: val_loss did not improve from 0.00250
58/58 - 0s - loss: 7.5220e-04 - accuracy: 0.0000e+00 - val_loss: 0.0033 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 262ms/epoch - 5ms/step
Epoch 113/500

Epoch 00113: val_loss did not improve from 0.00250
58/58 - 0s - loss: 7.5012e-04 - accuracy: 0.0000e+00 - val_loss: 0.0034 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 271ms/epoch - 5ms/step
Epoch 114/500

Epoch 00114: val_loss did not improve from 0.00250
58/58 - 0s - loss: 7.4805e-04 - accuracy: 0.0000e+00 - val_loss: 0.0034 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 270ms/epoch - 5ms/step
Epoch 115/500

Epoch 00115: val_loss did not improve from 0.00250
58/58 - 0s - loss: 7.4597e-04 - accuracy: 0.0000e+00 - val_loss: 0.0034 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 263ms/epoch - 5ms/step
Epoch 116/500

Epoch 00116: val_loss did not improve from 0.00250
58/58 - 0s - loss: 7.4389e-04 - accuracy: 0.0000e+00 - val_loss: 0.0035 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 271ms/epoch - 5ms/step
Epoch 117/500

Epoch 00117: val_loss did not improve from 0.00250
58/58 - 0s - loss: 7.4181e-04 - accuracy: 0.0000e+00 - val_loss: 0.0035 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 275ms/epoch - 5ms/step
Epoch 118/500

Epoch 00118: val_loss did not improve from 0.00250
58/58 - 0s - loss: 7.3973e-04 - accuracy: 0.0000e+00 - val_loss: 0.0036 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 265ms/epoch - 5ms/step
Epoch 119/500

Epoch 00119: val_loss did not improve from 0.00250
58/58 - 0s - loss: 7.3765e-04 - accuracy: 0.0000e+00 - val_loss: 0.0036 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 265ms/epoch - 5ms/step
Epoch 120/500

Epoch 00120: val_loss did not improve from 0.00250
58/58 - 0s - loss: 7.3557e-04 - accuracy: 0.0000e+00 - val_loss: 0.0036 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 259ms/epoch - 4ms/step
Epoch 121/500

Epoch 00121: val_loss did not improve from 0.00250
58/58 - 0s - loss: 7.3349e-04 - accuracy: 0.0000e+00 - val_loss: 0.0037 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 281ms/epoch - 5ms/step
Epoch 122/500

Epoch 00122: val_loss did not improve from 0.00250
58/58 - 0s - loss: 7.3141e-04 - accuracy: 0.0000e+00 - val_loss: 0.0037 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 278ms/epoch - 5ms/step
Epoch 123/500

Epoch 00123: val_loss did not improve from 0.00250
58/58 - 0s - loss: 7.2934e-04 - accuracy: 0.0000e+00 - val_loss: 0.0037 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 257ms/epoch - 4ms/step
Epoch 00123: early stopping
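The "saving model", "ReduceLROnPlateau" and "early stopping" messages in the log above are the signature of Keras's `ModelCheckpoint`, `ReduceLROnPlateau` and `EarlyStopping` callbacks. A minimal pure-Python sketch of that patience logic (the patience values and learning-rate schedule below are hypothetical; the notebook's actual callback settings are not visible in this output):

```python
def train_with_patience(val_losses, es_patience=50, lr_patience=5,
                        lr=1e-3, lr_factor=0.1, min_lr=1e-5):
    """Mimic EarlyStopping + ReduceLROnPlateau over a val_loss history."""
    best = float("inf")
    wait_es = wait_lr = 0
    stopped_at = None
    for epoch, vl in enumerate(val_losses, start=1):
        if vl < best:                        # "val_loss improved ... saving model"
            best = vl
            wait_es = wait_lr = 0
        else:                                # "val_loss did not improve"
            wait_es += 1
            wait_lr += 1
            if wait_lr >= lr_patience:       # ReduceLROnPlateau
                lr = max(lr * lr_factor, min_lr)
                wait_lr = 0
            if wait_es >= es_patience:       # EarlyStopping
                stopped_at = epoch
                break
    return best, lr, stopped_at
```

This is why training halts at epoch 123 even though `epochs=500` was requested: the best checkpoint (val_loss 0.00250) is what `LSTM6.h5` retains, not the final weights.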
SMA
Prediction vs Close:		53.73% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 71.66891031373025 
RMSE:	 8.465749247038342 
MAPE:	 6.880610177712922

EMA
Prediction vs Close:		54.85% Accuracy
Prediction vs Prediction:	43.66% Accuracy
MSE:	 67.24711347278334 
RMSE:	 8.200433736869249 
MAPE:	 6.781803215137433

WMA
Prediction vs Close:		55.6% Accuracy
Prediction vs Prediction:	47.76% Accuracy
MSE:	 58.550384022767716 
RMSE:	 7.6518222681115455 
MAPE:	 6.1413991074844

DEMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 127.72881036471918 
RMSE:	 11.301717142307146 
MAPE:	 10.306940424406019

KAMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 43.855468679122 
RMSE:	 6.622346161227303 
MAPE:	 5.4751276749367985

MIDPOINT
Prediction vs Close:		51.87% Accuracy
Prediction vs Prediction:	45.15% Accuracy
MSE:	 67.44416314351042 
RMSE:	 8.212439536673035 
MAPE:	 6.768235104271493
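The per-indicator summaries above pair squared-error metrics with a directional hit rate. A sketch of how such numbers can be computed; the directional-accuracy definition here (sign of the predicted move vs. the realised move from the previous close) is an assumption, since the notebook's scoring helper is not shown in this output:

```python
import math

def regression_metrics(actual, predicted):
    """MSE, RMSE and MAPE (percent) between two equal-length series."""
    errs = [a - p for a, p in zip(actual, predicted)]
    mse = sum(e * e for e in errs) / len(errs)
    rmse = math.sqrt(mse)
    mape = 100.0 * sum(abs(e) / abs(a) for e, a in zip(errs, actual)) / len(errs)
    return mse, rmse, mape

def directional_accuracy(actual, predicted):
    """Share of steps where the predicted move (relative to the previous
    actual close) has the same sign as the realised move -- an assumed
    reading of the 'Prediction vs Close' accuracy printed above."""
    hits = sum(
        (predicted[t] - actual[t - 1]) * (actual[t] - actual[t - 1]) > 0
        for t in range(1, len(actual))
    )
    return 100.0 * hits / (len(actual) - 1)
```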
T3
T3([input_arrays], [timeperiod=5], [vfactor=0.7])

Triple Exponential Moving Average (T3) (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 5
    vfactor: 0.7
Outputs:
    real
19
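The T3 described in the TA-Lib help text above is the Tillson T3: three applications of a "generalized DEMA", GD(x) = EMA(x)*(1+v) - EMA(EMA(x))*v. A pure-Python sketch of that calculation (note TA-Lib seeds its EMA with an SMA and returns NaN over the lookback window, so the earliest values of this sketch will differ from TA-Lib's output):

```python
def ema(xs, period):
    """Recursive EMA, seeded with the first value for simplicity."""
    k = 2.0 / (period + 1)
    out, prev = [], xs[0]
    for x in xs:
        prev = x * k + prev * (1 - k)
        out.append(prev)
    return out

def gd(xs, period, v):
    """Generalized DEMA: EMA*(1+v) - EMA(EMA)*v."""
    e1 = ema(xs, period)
    e2 = ema(e1, period)
    return [a * (1 + v) - b * v for a, b in zip(e1, e2)]

def t3(xs, timeperiod=5, vfactor=0.7):
    """Tillson T3 = GD(GD(GD(price)))."""
    return gd(gd(gd(xs, timeperiod, vfactor), timeperiod, vfactor),
              timeperiod, vfactor)
```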

Working on T3 predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-16569.270, Time=2.21 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14511.291, Time=2.35 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-15408.738, Time=7.62 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-15165.005, Time=8.22 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-15595.465, Time=6.91 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-15837.470, Time=10.11 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-15491.538, Time=9.25 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-16378.438, Time=2.50 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal
  return np.roots(self.polynomial_reduced_ma)**-1
 ARIMA(2,3,2)(0,0,0)[0]             : AIC=-16318.604, Time=3.49 sec
 ARIMA(1,3,1)(0,0,0)[0] intercept   : AIC=-16567.270, Time=2.32 sec

Best model:  ARIMA(1,3,1)(0,0,0)[0]          
Total fit time: 54.991 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(1, 3, 1)   Log Likelihood                8308.635
Date:                Sun, 12 Dec 2021   AIC                         -16569.270
Time:                        17:10:07   BIC                         -16456.690
Sample:                             0   HQIC                        -16526.035
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1          1.355e-13   3.43e-05   3.95e-09      1.000   -6.72e-05    6.72e-05
x2          5.009e-14   2.67e-05   1.88e-09      1.000   -5.23e-05    5.23e-05
x3         -9.101e-15   2.19e-05  -4.16e-10      1.000   -4.29e-05    4.29e-05
x4             1.0000   2.97e-05   3.37e+04      0.000       1.000       1.000
x5          3.626e-12    3.2e-05   1.13e-07      1.000   -6.28e-05    6.28e-05
x6          6.879e-17      0.000   5.13e-13      1.000      -0.000       0.000
x7          1.588e-13   4.04e-05   3.93e-09      1.000   -7.92e-05    7.92e-05
x8            -0.0002   9.77e-06    -20.395      0.000      -0.000      -0.000
x9          3.877e-14      0.001   6.24e-11      1.000      -0.001       0.001
x10         -7.41e-05      0.001     -0.129      0.897      -0.001       0.001
x11            0.0003   4.91e-05      6.926      0.000       0.000       0.000
x12           -0.0004   7.27e-05     -5.556      0.000      -0.001      -0.000
x13        -2.679e-14   3.39e-05   -7.9e-10      1.000   -6.65e-05    6.65e-05
x14          2.97e-13      0.000   2.31e-09      1.000      -0.000       0.000
x15         1.602e-12   7.47e-05   2.14e-08      1.000      -0.000       0.000
x16        -8.756e-13   4.29e-05  -2.04e-08      1.000   -8.41e-05    8.41e-05
x17         1.793e-12   6.56e-05   2.74e-08      1.000      -0.000       0.000
x18        -1.019e-13      0.000  -5.54e-10      1.000      -0.000       0.000
x19        -1.077e-12   8.29e-05   -1.3e-08      1.000      -0.000       0.000
x20         1.771e-13   8.45e-05    2.1e-09      1.000      -0.000       0.000
x21         9.233e-16      0.000   1.94e-12      1.000      -0.001       0.001
ar.L1         -0.2857      0.000  -2747.572      0.000      -0.286      -0.285
ma.L1         -0.9142   7.12e-06  -1.28e+05      0.000      -0.914      -0.914
sigma2          1e-10      7e-11      1.429      0.153   -3.71e-11    2.37e-10
===================================================================================
Ljung-Box (L1) (Q):                  84.32   Jarque-Bera (JB):           4804295.53
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             5.87
Prob(H) (two-sided):                  0.00   Kurtosis:                       381.28
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 2.06e+20. Standard errors may be unstable.
ARIMA order: (1, 3, 1) 
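The stepwise search above simply keeps the candidate order with the lowest AIC, where AIC = 2k - 2 ln(L) for k estimated parameters and log-likelihood L. The selected SARIMAX(1,3,1) fit reports Log Likelihood 8308.635 with 24 estimated parameters (21 exogenous terms plus ar.L1, ma.L1 and sigma2), which reproduces the printed AIC:

```python
def aic(k, loglik):
    """Akaike information criterion: 2k - 2*ln(L)."""
    return 2 * k - 2 * loglik

# Pick the order with the lowest AIC, as auto_arima's stepwise search does.
candidates = {
    (1, 3, 1): aic(24, 8308.635),   # values from the SARIMAX summary above
}
best_order = min(candidates, key=candidates.get)
```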

/usr/local/lib/python3.7/dist-packages/keras/optimizer_v2/adam.py:105: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.
  super(Adam, self).__init__(name, **kwargs)
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.03167, saving model to LSTM6.h5
43/43 - 4s - loss: 0.1223 - accuracy: 0.0000e+00 - val_loss: 0.0317 - val_accuracy: 0.0037 - lr: 0.0010 - 4s/epoch - 97ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.03167
43/43 - 0s - loss: 0.0537 - accuracy: 0.0000e+00 - val_loss: 0.0551 - val_accuracy: 0.0037 - lr: 0.0010 - 213ms/epoch - 5ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.03167
43/43 - 0s - loss: 0.0270 - accuracy: 0.0000e+00 - val_loss: 0.0575 - val_accuracy: 0.0037 - lr: 0.0010 - 208ms/epoch - 5ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.03167 to 0.01264, saving model to LSTM6.h5
43/43 - 0s - loss: 0.0175 - accuracy: 0.0000e+00 - val_loss: 0.0126 - val_accuracy: 0.0037 - lr: 0.0010 - 222ms/epoch - 5ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.01264
43/43 - 0s - loss: 0.0093 - accuracy: 0.0000e+00 - val_loss: 0.0307 - val_accuracy: 0.0037 - lr: 0.0010 - 214ms/epoch - 5ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.01264 to 0.00368, saving model to LSTM6.h5
43/43 - 0s - loss: 0.0060 - accuracy: 0.0000e+00 - val_loss: 0.0037 - val_accuracy: 0.0037 - lr: 0.0010 - 242ms/epoch - 6ms/step
[epochs 7-55 truncated: val_loss did not improve from 0.00368; ReduceLROnPlateau reduced the learning rate to 1.0e-04 at epoch 11 and to 1.0e-05 at epoch 16]

Epoch 56/500

Epoch 00056: val_loss did not improve from 0.00368
43/43 - 0s - loss: 7.7966e-04 - accuracy: 0.0000e+00 - val_loss: 0.0041 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 221ms/epoch - 5ms/step
Epoch 00056: early stopping
T3
Prediction vs Close:		53.73% Accuracy
Prediction vs Prediction:	44.78% Accuracy
MSE:	 154.873959597027 
RMSE:	 12.444836664136135 
MAPE:	 10.329006454112236
TEMA
TEMA([input_arrays], [timeperiod=30])

Triple Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
9
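Unlike T3, the plain TEMA in the help text above uses a fixed combination of three stacked EMAs, TEMA = 3*EMA(x) - 3*EMA(EMA(x)) + EMA(EMA(EMA(x))), to reduce lag. A pure-Python sketch (again, TA-Lib seeds its EMA with an SMA and emits NaN over the lookback window, so the earliest values will differ):

```python
def ema(xs, period):
    """Recursive EMA, seeded with the first value for simplicity."""
    k = 2.0 / (period + 1)
    out, prev = [], xs[0]
    for x in xs:
        prev = x * k + prev * (1 - k)
        out.append(prev)
    return out

def tema(xs, timeperiod=30):
    e1 = ema(xs, timeperiod)
    e2 = ema(e1, timeperiod)
    e3 = ema(e2, timeperiod)
    # TEMA = 3*EMA - 3*EMA(EMA) + EMA(EMA(EMA))
    return [3 * a - 3 * b + c for a, b, c in zip(e1, e2, e3)]
```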

Working on TEMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-16493.570, Time=2.72 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-15527.581, Time=7.39 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16154.477, Time=7.18 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-15134.948, Time=6.74 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-16538.454, Time=8.49 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-16271.346, Time=2.19 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=-16350.992, Time=13.04 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal
  return np.roots(self.polynomial_reduced_ma)**-1
 ARIMA(2,3,2)(0,0,0)[0]             : AIC=-16200.149, Time=3.34 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-16461.809, Time=16.02 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=-16384.147, Time=3.44 sec
 ARIMA(3,3,2)(0,0,0)[0]             : AIC=inf, Time=9.34 sec
 ARIMA(2,3,1)(0,0,0)[0] intercept   : AIC=-15110.164, Time=5.45 sec

Best model:  ARIMA(2,3,1)(0,0,0)[0]          
Total fit time: 85.353 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(2, 3, 1)   Log Likelihood                8294.227
Date:                Sun, 12 Dec 2021   AIC                         -16538.454
Time:                        17:14:34   BIC                         -16421.183
Sample:                             0   HQIC                        -16493.417
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1          3.591e-07      0.001      0.000      1.000      -0.002       0.002
x2            3.6e-07      0.002      0.000      1.000      -0.003       0.003
x3          3.611e-07      0.001      0.000      1.000      -0.002       0.002
x4             1.0000      0.000   2628.605      0.000       0.999       1.001
x5          3.432e-07      0.000      0.001      0.999      -0.001       0.001
x6          1.714e-07   4.05e-05      0.004      0.997   -7.91e-05    7.95e-05
x7          3.541e-07      0.001      0.000      1.000      -0.003       0.003
x8            -0.0002      0.000     -1.006      0.315      -0.001       0.000
x9         -7.559e-08      0.000     -0.000      1.000      -0.001       0.001
x10            0.0001      0.000      0.492      0.623      -0.000       0.001
x11           -0.0006      0.000     -2.697      0.007      -0.001      -0.000
x12            0.0005      0.000      1.741      0.082   -5.97e-05       0.001
x13           3.6e-07      0.000      0.002      0.999      -0.000       0.000
x14         1.003e-06      0.001      0.001      0.999      -0.002       0.002
x15         3.506e-07   7.16e-05      0.005      0.996      -0.000       0.000
x16         5.157e-07      0.000      0.005      0.996      -0.000       0.000
x17         3.516e-07   6.59e-05      0.005      0.996      -0.000       0.000
x18         1.166e-07      0.000      0.001      1.000      -0.000       0.000
x19         3.922e-07    7.5e-05      0.005      0.996      -0.000       0.000
x20         -3.64e-07      0.000     -0.002      0.999      -0.000       0.000
x21         4.458e-07      0.000      0.004      0.997      -0.000       0.000
ar.L1         -0.4019   4.12e-05  -9758.484      0.000      -0.402      -0.402
ar.L2         -0.1006   1.58e-05  -6360.873      0.000      -0.101      -0.101
ma.L1         -0.7963   8.45e-06  -9.43e+04      0.000      -0.796      -0.796
sigma2      9.048e-11    7.2e-11      1.257      0.209   -5.06e-11    2.32e-10
===================================================================================
Ljung-Box (L1) (Q):                  64.02   Jarque-Bera (JB):           4424775.47
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             5.53
Prob(H) (two-sided):                  0.00   Kurtosis:                       366.04
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.02e+20. Standard errors may be unstable.
ARIMA order: (2, 3, 1) 
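The AIC figures that drive the stepwise search above follow directly from the fitted log-likelihood and the number of estimated parameters. For the selected SARIMAX(2, 3, 1), k = 25 (21 exogenous coefficients, two AR terms, one MA term, and sigma²):

```python
def aic(log_likelihood, k):
    # Akaike information criterion: 2k - 2*ln(L); lower is better
    return 2 * k - 2 * log_likelihood

# Values taken from the SARIMAX(2, 3, 1) summary above
print(aic(8294.227, 25))  # ≈ -16538.454, matching the reported AIC
```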

/usr/local/lib/python3.7/dist-packages/keras/optimizer_v2/adam.py:105: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.
  super(Adam, self).__init__(name, **kwargs)
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.08025, saving model to LSTM6.h5
90/90 - 4s - loss: 0.1296 - accuracy: 0.0000e+00 - val_loss: 0.0802 - val_accuracy: 0.0037 - lr: 0.0010 - 4s/epoch - 46ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.08025 to 0.00870, saving model to LSTM6.h5
Epoch 3/500

Epoch 00003: val_loss improved from 0.00870 to 0.00668, saving model to LSTM6.h5
Epoch 4/500

Epoch 00004: val_loss improved from 0.00668 to 0.00660, saving model to LSTM6.h5
90/90 - 0s - loss: 0.0286 - accuracy: 0.0000e+00 - val_loss: 0.0066 - val_accuracy: 0.0037 - lr: 0.0010 - 437ms/epoch - 5ms/step

[Epochs 5-54 condensed: val_loss never improved beyond 0.00660; ReduceLROnPlateau cut the learning rate to 1e-04 at epoch 8 and to 1e-05 at epoch 13; training loss fell steadily from 0.0280 to 9.3426e-04]

Epoch 00054: early stopping
SMA
Prediction vs Close:		53.73% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 71.66891031373025 
RMSE:	 8.465749247038342 
MAPE:	 6.880610177712922

EMA
Prediction vs Close:		54.85% Accuracy
Prediction vs Prediction:	43.66% Accuracy
MSE:	 67.24711347278334 
RMSE:	 8.200433736869249 
MAPE:	 6.781803215137433

WMA
Prediction vs Close:		55.6% Accuracy
Prediction vs Prediction:	47.76% Accuracy
MSE:	 58.550384022767716 
RMSE:	 7.6518222681115455 
MAPE:	 6.1413991074844

DEMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 127.72881036471918 
RMSE:	 11.301717142307146 
MAPE:	 10.306940424406019

KAMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	48.51% Accuracy
MSE:	 43.855468679122 
RMSE:	 6.622346161227303 
MAPE:	 5.4751276749367985

MIDPOINT
Prediction vs Close:		51.87% Accuracy
Prediction vs Prediction:	45.15% Accuracy
MSE:	 67.44416314351042 
RMSE:	 8.212439536673035 
MAPE:	 6.768235104271493

T3
Prediction vs Close:		53.73% Accuracy
Prediction vs Prediction:	44.78% Accuracy
MSE:	 154.873959597027 
RMSE:	 12.444836664136135 
MAPE:	 10.329006454112236

TEMA
Prediction vs Close:		50.75% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 163.5522149093863 
RMSE:	 12.788753454085597 
MAPE:	 11.455728191899736
Runtime: mins: 42.64405553298335
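The RMSE figures reported above are simply the square root of each MSE; for the TEMA test set, for example:

```python
import math

mse = 163.5522149093863  # TEMA test MSE from the summary above
print(math.sqrt(mse))    # ≈ 12.788753454085597, the reported RMSE
```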

Architecture Used

In [ ]:
from google.colab import files
import cv2
uploaded = files.upload()
Saving Experiment6.png to Experiment6 (1).png
In [ ]:
imgfile = 'Experiment6'
img = cv2.imread('Experiment6.png')
plt.figure(figsize=(20,10))
plt.axis("off")
plt.title('LSTM Architecture '+imgfile,fontsize=18)
plt.imshow(img)
Out[ ]:
<matplotlib.image.AxesImage at 0x7fa52b8bb750>

Model Plots

In [81]:
with open('simulation6_data.json') as json_file:
    simulation6 = json.load(json_file)
fileimg = 'Experiment6'
In [82]:
for i in range(len(list(simulation6.keys()))):
  SIM = list(simulation6.keys())[i]
  plot_train(simulation6,SIM)
  plot_test(simulation6,SIM)
----- Train RMSE for SMA ----- 8.860010937586445
----- Train_MSE_LSTM for SMA ----- 78.49979381415143
----- Train MAE LSTM for SMA ----- 7.754632690589469
----- Test RMSE for SMA----- 8.465749247038342
----- Test_MSE_LSTM for SMA----- 71.66891031373025
----- Test_MAE_LSTM for SMA----- 6.880610177712922
----- Train RMSE for EMA ----- 10.185838832459739
----- Train_MSE_LSTM for EMA ----- 103.7513127208448
----- Train MAE LSTM for EMA ----- 9.021545289753416
----- Test RMSE for EMA----- 8.200433736869249
----- Test_MSE_LSTM for EMA----- 67.24711347278334
----- Test_MAE_LSTM for EMA----- 6.781803215137433
----- Train RMSE for WMA ----- 10.495797568969957
----- Train_MSE_LSTM for WMA ----- 110.16176660879566
----- Train MAE LSTM for WMA ----- 9.342000266276576
----- Test RMSE for WMA----- 7.6518222681115455
----- Test_MSE_LSTM for WMA----- 58.550384022767716
----- Test_MAE_LSTM for WMA----- 6.1413991074844
----- Train RMSE for DEMA ----- 12.196853963107756
----- Train_MSE_LSTM for DEMA ----- 148.76324659737736
----- Train MAE LSTM for DEMA ----- 10.958250443400392
----- Test RMSE for DEMA----- 11.301717142307146
----- Test_MSE_LSTM for DEMA----- 127.72881036471918
----- Test_MAE_LSTM for DEMA----- 10.306940424406019
----- Train RMSE for KAMA ----- 10.5711440318656
----- Train_MSE_LSTM for KAMA ----- 111.74908614244768
----- Train MAE LSTM for KAMA ----- 9.510206620484396
----- Test RMSE for KAMA----- 6.622346161227303
----- Test_MSE_LSTM for KAMA----- 43.855468679122
----- Test_MAE_LSTM for KAMA----- 5.4751276749367985
----- Train RMSE for MIDPOINT ----- 9.431256276275592
----- Train_MSE_LSTM for MIDPOINT ----- 88.94859494878776
----- Train MAE LSTM for MIDPOINT ----- 8.385353724073704
----- Test RMSE for MIDPOINT----- 8.212439536673035
----- Test_MSE_LSTM for MIDPOINT----- 67.44416314351042
----- Test_MAE_LSTM for MIDPOINT----- 6.768235104271493
----- Train RMSE for T3 ----- 12.0506886763047
----- Train_MSE_LSTM for T3 ----- 145.21909757321833
----- Train MAE LSTM for T3 ----- 10.839950464284371
----- Test RMSE for T3----- 12.444836664136135
----- Test_MSE_LSTM for T3----- 154.873959597027
----- Test_MAE_LSTM for T3----- 10.329006454112236
----- Train RMSE for TEMA ----- 7.410516533334238
----- Train_MSE_LSTM for TEMA ----- 54.915755290820094
----- Train MAE LSTM for TEMA ----- 5.083548934505515
----- Test RMSE for TEMA----- 12.788753454085597
----- Test_MSE_LSTM for TEMA----- 163.5522149093863
----- Test_MAE_LSTM for TEMA----- 11.455728191899736

ARIMA with Exogenous Variable Multistep Multivariate LSTM Hybrid Model - Experiment 7

In [ ]:
def get_arima_exog(dataframe, original_data, train_len, test_len):
    # Prepare train and test data for the exogenous variables
    # (use the dataframe argument rather than the global low_vol)
    X_value = pd.DataFrame(dataframe.iloc[:, :])
    y_value = pd.DataFrame(dataframe.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scaler.fit(X_value)
    y_scaler.fit(y_value)
    X_scale_dataset = X_scaler.transform(X_value)
    y_scale_dataset = y_scaler.transform(y_value)
    # Get data and check shape
    # X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)#X will be of shape 224 X 3 X 21 (each 3 X 21 array will be 3 days' worth of data). yc will have the corresponding closing price value
    # pdb.set_trace()
    X_train, X_test, = split_train_test(X_scale_dataset)
    y_train, y_test, = split_train_test(y_scale_dataset)
    yc_train, yc_test = split_train_test(original_data)  # original_data holds the close series
    yc = yc_test.values.tolist()
    y_train_list = y_train.flatten().tolist()
    y_test_list = y_test.flatten().tolist()
    # yc_train, yc_test, = split_train_test(original_data)
    index_train, index_test, = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)

    # Initialize model
    model = auto_arima(y_train_list,exogenous  = X_train,trace=True, error_action='ignore', start_p=1,start_q=1,max_p=3,max_q=3,d=3,
            suppress_warnings=True,stepwise=True,seasonal=True)

    # Determine model parameters
    print(model.summary())
    model.fit(y_train_list, maxiter=200)
    order = model.get_params()['order']
    print('ARIMA order:', order, '\n')

    # Generate predictions
    prediction = []
    for i in range(len(y_test_list)):
        model = pmdarima.ARIMA(order=order)
        model.fit(y_train_list)
        # print('working on', i+1, 'of', len(y_test), '-- ' + str(int(100 * (i + 1) / len(y_test))) + '% complete')

        prediction.append(model.predict()[0])
        y_train_list.append(y_test_list[i])

    predictionte = y_scaler.inverse_transform(np.array(prediction).reshape(-1,1))
    y_test_ = y_scaler.inverse_transform(np.array(y_test_list).reshape(-1,1))

    # Generate error data
    mse = mean_squared_error(yc_test, predictionte)
    rmse = mse ** 0.5
    mae = mean_absolute_error(y_test_ , predictionte )
    return yc,predictionte.flatten().tolist(), mse, rmse, mae
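The prediction loop in get_arima_exog follows a standard walk-forward pattern: forecast one step ahead, then append the realized value to the training history before forecasting the next step. A minimal sketch of that pattern, with a naive mean forecaster standing in for the refit pmdarima.ARIMA model:

```python
def walk_forward(train, test, forecast):
    # One-step-ahead rolling forecast: predict, then reveal the true value
    history, preds = list(train), []
    for actual in test:
        preds.append(forecast(history))
        history.append(actual)
    return preds

mean_forecast = lambda h: sum(h) / len(h)
print(walk_forward([1, 2, 3], [4, 5], mean_forecast))  # [2.0, 2.5]
```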
In [ ]:
def get_lstm(data,original_data, train_len, test_len,img_file,ma ,lstm_len=3):
    # prepare train and test data
    X_value = pd.DataFrame(data.iloc[:, :])
    y_value = pd.DataFrame(data.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scaler.fit(X_value)
    y_scaler.fit(y_value)
    X_scale_dataset = X_scaler.transform(X_value)
    y_scale_dataset = y_scaler.transform(y_value)
    # Get data and check shape: X has shape n_samples x n_steps_in x n_features
    # (each slice is n_steps_in days of data); yc holds the corresponding closing prices
    X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)
    # pdb.set_trace()
    X_train, X_test, = split_train_test(X)
    y_train, y_test, = split_train_test(y)
    # yc_train, yc_test, = split_train_test(original_data)
    index_train, index_test, = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
    det = 20  # fixed offset subtracted from the test predictions below
    input_dim = X_train.shape[1]     # n_steps_in (e.g. 3)
    feature_size = X_train.shape[2]  # number of features (e.g. 24)
    output_dim = y_train.shape[1]    # n_steps_out (e.g. 1)



    # Option 1
    # Set up & fit LSTM RNN
    # model = Sequential()
    # model.add(LSTM(256, activation='relu', kernel_initializer='he_normal', input_shape=(input_dim, feature_size)))
    # model.add(Dense(units=64,activation='relu'))
    # model.add(Dropout(0.5))
    # model.add(Dense(units=output_dim))
    # model.compile(optimizer=Adam(learning_rate = 0.001), loss='mse')

    # ## Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()


    # # # option 2
    # model = Sequential()
    # model.add(Bidirectional(LSTM(units= 128), input_shape=(input_dim, feature_size)))
    # model.add(Dense(64))
    # model.add(Dense(units=output_dim))
    # model.compile(optimizer=Adam(lr = 0.001), loss='mean_squared_error', metrics=['accuracy'])
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM7.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()

    # Option 3
    # define custom activation
    # 
    class Double_Tanh(Activation):
        def __init__(self, activation, **kwargs):
            super(Double_Tanh, self).__init__(activation, **kwargs)
            self.__name__ = 'double_tanh'

    def double_tanh(x):
        return (K.tanh(x) * 2)

    get_custom_objects().update({'double_tanh':Double_Tanh(double_tanh)})
    # Model Generation
    model = Sequential()
    #check https://machinelearningmastery.com/use-weight-regularization-lstm-networks-time-series-forecasting/
    model.add(LSTM(25, input_shape=(input_dim, feature_size), dropout=0.2, kernel_regularizer=l1_l2(0.00,0.00), bias_regularizer=l1_l2(0.00,0.00)))
    model.add(Dense(1))
    model.add(Activation(double_tanh))
    model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mse', 'mae'])
    # Common code
    callbacks = [
    EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    ModelCheckpoint('LSTM7.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    fname1 = img_file+'.png'
    tensorflow.keras.utils.plot_model(
        model, to_file=fname1, show_shapes=True, show_dtype=False,
        show_layer_names=True, expand_nested=False, dpi=96,
        layer_range=None, show_layer_activations=False
    )
    history = model.fit(X_train, y_train, epochs=500, batch_size=int( optimized_period[ma]), verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # plot loss
    fname2 = img_file+'-'+ma
    plt.title(img_file+'-'+ma+' Loss')
    plt.xlabel("Epochs")
    plt.ylabel("Loss")
    pyplot.plot(history.history['loss'], label='train')
    pyplot.plot(history.history['val_loss'], label='validation')
    pyplot.legend()
    pyplot.savefig(fname2+'.png',dpi='figure')
    pyplot.show()

    # Option 4
    # Set up & fit LSTM RNN
    # model = Sequential()
    # model.add(LSTM(units=lstm_len, return_sequences=True, input_shape=(x_train.shape[1], 1)))
    # model.add(LSTM(units=int(lstm_len/2)))
    # model.add(Dense(1, activation='sigmoid'))
    # model.compile(loss='mean_squared_error', optimizer='adam')
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()



    # Generate predictions
    predictiontr = model.predict(X_train, verbose=0)
    predictiontr = y_scaler.inverse_transform(predictiontr).tolist()
    outputtr = []
    for i in range(len(predictiontr)):
        outputtr.extend(predictiontr[i])
    predictiontr = outputtr
    # Generate error data

    ## replace with yc , xtest generated by new multistep method
    mse_tr = mean_squared_error(y_train, predictiontr)
    rmse_tr = mse_tr ** 0.5
    # mape = mean_absolute_percentage_error(X_test, pd.Series(predictiontr))
    mae_tr = mean_absolute_error(y_train, pd.Series(predictiontr))
    # Original_tr = pd.Series(yc_train)
    Original_tr = y_scaler.inverse_transform(y_train).flatten().tolist()


    predictionte = model.predict(X_test, verbose=0)
    predictionte = (y_scaler.inverse_transform(predictionte)-det).tolist()
    outputte = []
    for i in range(len(predictionte)):
        outputte.extend(predictionte[i])
    predictionte = outputte
    # Generate error data

    mse_te = mean_squared_error(y_test, predictionte)
    rmse_te = mse_te ** 0.5
    # mape = mean_absolute_percentage_error(X_test, pd.Series(predictionte))
    mae_te = mean_absolute_error(y_test, pd.Series(predictionte))
    # Original_te = pd.Series(yc_test)
    Original_te = y_scaler.inverse_transform(y_test).flatten().tolist()

    return Original_tr, predictiontr, mse_tr, rmse_tr,mae_tr,Original_te,predictionte, mse_te,rmse_te,mae_te
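The custom double_tanh activation used in Option 3 simply rescales tanh to the range (-2, 2), giving the output layer headroom beyond the [-1, 1] band of the MinMax-scaled targets. Its behavior in plain Python:

```python
import math

def double_tanh(x):
    # Same S-shape as tanh, but saturating at ±2 instead of ±1
    return 2 * math.tanh(x)

print(double_tanh(0.0))             # 0.0
print(round(double_tanh(10.0), 6))  # 2.0 (saturated)
```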
In [ ]:
if __name__ == '__main__':
    start_time = timeit.default_timer()
    simulation7 = {}
    imgfile = 'Experiment7'
    for ma in optimized_period:
                print(ma)
                print(functions[ma])
                print ( int( optimized_period[ma]))
              # if ma == 'SMA':
                low_vol = df.apply(lambda c:  functions[ma](c, timeperiod = int( optimized_period[ma])))
                low_vol = low_vol.fillna(0)
                low_vol_data = df['close']
                high_vol = pd.DataFrame()
                df2 = df.copy()
                for i in df2.columns:
                  if i in low_vol.columns:
                    high_vol[i] = df2[i].subtract(low_vol[i], fill_value=0)
                high_vol_data = df['close']
                ## *****************************************************
                # Generate ARIMA and LSTM predictions
                print('\nWorking on ' + ma + ' predictions')
                try:
                  print('parameters used : ', train_len, test_len)
                  low_vol_Original, low_vol_prediction, low_vol_mse, low_vol_rmse, low_vol_mae = get_arima_exog(low_vol, low_vol_data, train_len, test_len)
                except Exception as e:
                    print('ARIMA error ({}), skipping to next MA type'.format(e))
                    continue
                Original_tr, predictiontr, mse_tr, rmse_tr, mae_tr, high_vol_Original, high_vol_prediction, high_vol_mse, high_vol_rmse, high_vol_mae = get_lstm(high_vol, high_vol_data, train_len, test_len, imgfile, ma)
                final_prediction_tr = df['close'].head(train_len).values + pd.Series(predictiontr) # ignoring first 3 steps 
                mse_ftr = mean_squared_error(df['close'].head(train_len).values,final_prediction_tr.values)
                rmse_ftr = mse_ftr ** 0.5
                mape_ftr = mean_absolute_percentage_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
                mae_ftr = mean_absolute_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)

                final_prediction = pd.Series(low_vol_prediction[3:]) + pd.Series(high_vol_prediction)
                mse = mean_squared_error(df['close'].tail(test_len).values,final_prediction.values)
                rmse = mse ** 0.5
                mape = mean_absolute_percentage_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
                mae = mean_absolute_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
                # Generate prediction accuracy
                actual = df['close'].tail(test_len).values
                result_1 = []
                result_2 = []
                for i in range(1, len(final_prediction)):
                    # Compare prediction to previous close price
                    if final_prediction[i] > actual[i-1] and actual[i] > actual[i-1]:
                        result_1.append(1)
                    elif final_prediction[i] < actual[i-1] and actual[i] < actual[i-1]:
                        result_1.append(1)
                    else:
                        result_1.append(0)

                    # Compare prediction to previous prediction
                    if final_prediction[i] > final_prediction[i-1] and actual[i] > actual[i-1]:
                        result_2.append(1)
                    elif final_prediction[i] < final_prediction[i-1] and actual[i] < actual[i-1]:
                        result_2.append(1)
                    else:
                        result_2.append(0)

                accuracy_1 = np.mean(result_1)
                accuracy_2 = np.mean(result_2)

                simulation7[ma] = {'low_vol': {'original':list(low_vol_Original), 'prediction': list(low_vol_prediction) , 'mse': low_vol_mse,
                                              'rmse': low_vol_rmse, 'mae' : low_vol_mae},
                                  'high_vol': {'original':list(high_vol_Original),'prediction': list(high_vol_prediction), 'mse': high_vol_mse,
                                              'rmse': high_vol_rmse, 'mae' : high_vol_mae},
                                  'final_tr': {'original':df['close'].head(train_len).tolist(),'prediction': final_prediction_tr.values.tolist(), 'mse': mse_ftr,
                                              'rmse': rmse_ftr, 'mae' : mae_ftr},
                                  'final': {'original': df['close'].tail(test_len).tolist(), 'prediction': final_prediction.values.tolist(), 'mse': mse,
                                            'rmse': rmse, 'mae': mae },
                                  'accuracy': {'prediction vs close': accuracy_1, 'prediction vs prediction': accuracy_2}}

                # save simulation data here as checkpoint
                with open('simulation7_data.json', 'w') as fp:
                    json.dump(simulation7, fp)

                for key in simulation7.keys():
                    print('\n' + key)
                    print('Prediction vs Close:\t\t' + str(round(100*simulation7[key]['accuracy']['prediction vs close'], 2))
                          + '% Accuracy')
                    print('Prediction vs Prediction:\t' + str(round(100*simulation7[key]['accuracy']['prediction vs prediction'], 2))
                          + '% Accuracy')
                    print('MSE:\t', simulation7[key]['final']['mse'],
                          '\nRMSE:\t', simulation7[key]['final']['rmse'],
                          '\nMAE:\t', simulation7[key]['final']['mae'])
    elapsed = timeit.default_timer() - start_time
    print('Runtime: mins:',elapsed/60)
SMA
SMA([input_arrays], [timeperiod=30])

Simple Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
17

Working on SMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-15000.708, Time=8.16 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-13492.284, Time=2.28 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-15827.971, Time=8.01 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-13635.197, Time=9.85 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-14132.778, Time=3.78 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-15140.312, Time=9.83 sec
 ARIMA(1,3,0)(0,0,0)[0] intercept   : AIC=-13970.469, Time=7.14 sec

Best model:  ARIMA(1,3,0)(0,0,0)[0]          
Total fit time: 49.071 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(1, 3, 0)   Log Likelihood                7936.985
Date:                Sun, 12 Dec 2021   AIC                         -15827.971
Time:                        15:03:36   BIC                         -15720.081
Sample:                             0   HQIC                        -15786.537
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -4.786e-05      0.001     -0.066      0.947      -0.001       0.001
x2         -4.789e-05      0.001     -0.085      0.932      -0.001       0.001
x3         -4.819e-05      0.000     -0.105      0.917      -0.001       0.001
x4             1.0000      0.001   1557.248      0.000       0.999       1.001
x5         -4.579e-05      0.001     -0.071      0.943      -0.001       0.001
x6          -5.16e-05      0.000     -0.432      0.666      -0.000       0.000
x7         -4.778e-05      0.000     -0.278      0.781      -0.000       0.000
x8            -0.0012      0.000     -7.403      0.000      -0.002      -0.001
x9         -3.454e-06      0.002     -0.002      0.998      -0.003       0.003
x10           -0.0005      0.001     -0.403      0.687      -0.003       0.002
x11            0.0029      0.000     10.904      0.000       0.002       0.003
x12           -0.0003      0.000     -1.815      0.069      -0.001    2.06e-05
x13        -4.809e-05      0.000     -0.157      0.875      -0.001       0.001
x14           -0.0001      0.000     -0.482      0.630      -0.001       0.000
x15        -5.214e-05      0.000     -0.273      0.785      -0.000       0.000
x16        -4.468e-05      0.000     -0.125      0.901      -0.001       0.001
x17        -4.224e-05      0.000     -0.202      0.840      -0.000       0.000
x18        -8.086e-05      0.000     -0.270      0.787      -0.001       0.001
x19        -5.537e-05      0.000     -0.244      0.807      -0.000       0.000
x20         8.423e-05      0.000      0.333      0.739      -0.000       0.001
x21        -4.232e-05      0.000     -0.166      0.868      -0.001       0.000
ar.L1         -0.6666   6.03e-06  -1.11e+05      0.000      -0.667      -0.667
sigma2      4.093e-10   8.97e-11      4.563      0.000    2.33e-10    5.85e-10
===================================================================================
Ljung-Box (L1) (Q):                  60.24   Jarque-Bera (JB):           1334882.31
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.11   Skew:                            -3.81
Prob(H) (two-sided):                  0.00   Kurtosis:                       202.35
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 5.73e+20. Standard errors may be unstable.
ARIMA order: (1, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.70185, saving model to LSTM7.h5
48/48 - 2s - loss: 0.2036 - mse: 0.2036 - mae: 0.3350 - val_loss: 0.7018 - val_mse: 0.7018 - val_mae: 0.7934 - lr: 0.0010 - 2s/epoch - 49ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.70185 to 0.56641, saving model to LSTM7.h5
48/48 - 0s - loss: 0.0343 - mse: 0.0343 - mae: 0.1490 - val_loss: 0.5664 - val_mse: 0.5664 - val_mae: 0.7092 - lr: 0.0010 - 199ms/epoch - 4ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.56641 to 0.49672, saving model to LSTM7.h5
48/48 - 0s - loss: 0.0187 - mse: 0.0187 - mae: 0.1079 - val_loss: 0.4967 - val_mse: 0.4967 - val_mae: 0.6625 - lr: 0.0010 - 185ms/epoch - 4ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.49672 to 0.46591, saving model to LSTM7.h5
48/48 - 0s - loss: 0.0145 - mse: 0.0145 - mae: 0.0925 - val_loss: 0.4659 - val_mse: 0.4659 - val_mae: 0.6413 - lr: 0.0010 - 219ms/epoch - 5ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.46591 to 0.45875, saving model to LSTM7.h5
48/48 - 0s - loss: 0.0116 - mse: 0.0116 - mae: 0.0842 - val_loss: 0.4588 - val_mse: 0.4588 - val_mae: 0.6361 - lr: 0.0010 - 204ms/epoch - 4ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.45875 to 0.41503, saving model to LSTM7.h5
48/48 - 0s - loss: 0.0116 - mse: 0.0116 - mae: 0.0841 - val_loss: 0.4150 - val_mse: 0.4150 - val_mae: 0.6044 - lr: 0.0010 - 193ms/epoch - 4ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.41503
48/48 - 0s - loss: 0.0093 - mse: 0.0093 - mae: 0.0759 - val_loss: 0.4435 - val_mse: 0.4435 - val_mae: 0.6265 - lr: 0.0010 - 191ms/epoch - 4ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.41503
48/48 - 0s - loss: 0.0117 - mse: 0.0117 - mae: 0.0835 - val_loss: 0.4200 - val_mse: 0.4200 - val_mae: 0.6104 - lr: 0.0010 - 180ms/epoch - 4ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.41503
48/48 - 0s - loss: 0.0085 - mse: 0.0085 - mae: 0.0701 - val_loss: 0.4324 - val_mse: 0.4324 - val_mae: 0.6203 - lr: 0.0010 - 180ms/epoch - 4ms/step
Epoch 10/500

Epoch 00010: val_loss improved from 0.41503 to 0.39748, saving model to LSTM7.h5
48/48 - 0s - loss: 0.0092 - mse: 0.0092 - mae: 0.0748 - val_loss: 0.3975 - val_mse: 0.3975 - val_mae: 0.5940 - lr: 0.0010 - 189ms/epoch - 4ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.39748
48/48 - 0s - loss: 0.0094 - mse: 0.0094 - mae: 0.0744 - val_loss: 0.4509 - val_mse: 0.4509 - val_mae: 0.6359 - lr: 0.0010 - 178ms/epoch - 4ms/step
Epoch 12/500

Epoch 00012: val_loss improved from 0.39748 to 0.39172, saving model to LSTM7.h5
48/48 - 0s - loss: 0.0101 - mse: 0.0101 - mae: 0.0778 - val_loss: 0.3917 - val_mse: 0.3917 - val_mae: 0.5904 - lr: 0.0010 - 208ms/epoch - 4ms/step
Epoch 13/500

Epoch 00013: val_loss improved from 0.39172 to 0.36768, saving model to LSTM7.h5
48/48 - 0s - loss: 0.0090 - mse: 0.0090 - mae: 0.0729 - val_loss: 0.3677 - val_mse: 0.3677 - val_mae: 0.5712 - lr: 0.0010 - 207ms/epoch - 4ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.36768
48/48 - 0s - loss: 0.0108 - mse: 0.0108 - mae: 0.0778 - val_loss: 0.3685 - val_mse: 0.3685 - val_mae: 0.5724 - lr: 0.0010 - 176ms/epoch - 4ms/step
Epoch 15/500

Epoch 00015: val_loss improved from 0.36768 to 0.35888, saving model to LSTM7.h5
48/48 - 0s - loss: 0.0106 - mse: 0.0106 - mae: 0.0798 - val_loss: 0.3589 - val_mse: 0.3589 - val_mae: 0.5645 - lr: 0.0010 - 189ms/epoch - 4ms/step
Epoch 16/500

Epoch 00016: val_loss improved from 0.35888 to 0.35117, saving model to LSTM7.h5
48/48 - 0s - loss: 0.0105 - mse: 0.0105 - mae: 0.0780 - val_loss: 0.3512 - val_mse: 0.3512 - val_mae: 0.5583 - lr: 0.0010 - 194ms/epoch - 4ms/step
Epoch 17/500

Epoch 00017: val_loss improved from 0.35117 to 0.33856, saving model to LSTM7.h5
48/48 - 0s - loss: 0.0112 - mse: 0.0112 - mae: 0.0819 - val_loss: 0.3386 - val_mse: 0.3386 - val_mae: 0.5473 - lr: 0.0010 - 202ms/epoch - 4ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.33856
48/48 - 0s - loss: 0.0122 - mse: 0.0122 - mae: 0.0852 - val_loss: 0.3443 - val_mse: 0.3443 - val_mae: 0.5520 - lr: 0.0010 - 195ms/epoch - 4ms/step
Epoch 19/500

Epoch 00019: val_loss improved from 0.33856 to 0.29144, saving model to LSTM7.h5
48/48 - 0s - loss: 0.0119 - mse: 0.0119 - mae: 0.0870 - val_loss: 0.2914 - val_mse: 0.2914 - val_mae: 0.5037 - lr: 0.0010 - 187ms/epoch - 4ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.29144
48/48 - 0s - loss: 0.0148 - mse: 0.0148 - mae: 0.0952 - val_loss: 0.3920 - val_mse: 0.3920 - val_mae: 0.5911 - lr: 0.0010 - 182ms/epoch - 4ms/step
Epoch 21/500

Epoch 00021: val_loss improved from 0.29144 to 0.25381, saving model to LSTM7.h5
48/48 - 0s - loss: 0.0138 - mse: 0.0138 - mae: 0.0940 - val_loss: 0.2538 - val_mse: 0.2538 - val_mae: 0.4667 - lr: 0.0010 - 192ms/epoch - 4ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.25381
48/48 - 0s - loss: 0.0150 - mse: 0.0150 - mae: 0.0965 - val_loss: 0.3521 - val_mse: 0.3521 - val_mae: 0.5574 - lr: 0.0010 - 193ms/epoch - 4ms/step
Epoch 23/500

Epoch 00023: val_loss improved from 0.25381 to 0.21639, saving model to LSTM7.h5
48/48 - 0s - loss: 0.0136 - mse: 0.0136 - mae: 0.0928 - val_loss: 0.2164 - val_mse: 0.2164 - val_mae: 0.4260 - lr: 0.0010 - 218ms/epoch - 5ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.21639
48/48 - 0s - loss: 0.0137 - mse: 0.0137 - mae: 0.0955 - val_loss: 0.3752 - val_mse: 0.3752 - val_mae: 0.5750 - lr: 0.0010 - 202ms/epoch - 4ms/step
Epoch 25/500

Epoch 00025: val_loss improved from 0.21639 to 0.17944, saving model to LSTM7.h5
48/48 - 0s - loss: 0.0119 - mse: 0.0119 - mae: 0.0876 - val_loss: 0.1794 - val_mse: 0.1794 - val_mae: 0.3819 - lr: 0.0010 - 193ms/epoch - 4ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.17944
48/48 - 0s - loss: 0.0105 - mse: 0.0105 - mae: 0.0824 - val_loss: 0.3447 - val_mse: 0.3447 - val_mae: 0.5489 - lr: 0.0010 - 194ms/epoch - 4ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.17944
48/48 - 0s - loss: 0.0091 - mse: 0.0091 - mae: 0.0763 - val_loss: 0.1934 - val_mse: 0.1934 - val_mae: 0.3992 - lr: 0.0010 - 187ms/epoch - 4ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.17944
48/48 - 0s - loss: 0.0090 - mse: 0.0090 - mae: 0.0761 - val_loss: 0.3444 - val_mse: 0.3444 - val_mae: 0.5488 - lr: 0.0010 - 198ms/epoch - 4ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.17944
48/48 - 0s - loss: 0.0087 - mse: 0.0087 - mae: 0.0738 - val_loss: 0.2230 - val_mse: 0.2230 - val_mae: 0.4327 - lr: 0.0010 - 185ms/epoch - 4ms/step
Epoch 30/500

Epoch 00030: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00030: val_loss did not improve from 0.17944
48/48 - 0s - loss: 0.0087 - mse: 0.0087 - mae: 0.0735 - val_loss: 0.3052 - val_mse: 0.3052 - val_mae: 0.5142 - lr: 0.0010 - 183ms/epoch - 4ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.17944
48/48 - 0s - loss: 0.0137 - mse: 0.0137 - mae: 0.0957 - val_loss: 0.2431 - val_mse: 0.2431 - val_mae: 0.4553 - lr: 1.0000e-04 - 182ms/epoch - 4ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.17944
48/48 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0563 - val_loss: 0.2285 - val_mse: 0.2285 - val_mae: 0.4405 - lr: 1.0000e-04 - 178ms/epoch - 4ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.17944
48/48 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0569 - val_loss: 0.2234 - val_mse: 0.2234 - val_mae: 0.4351 - lr: 1.0000e-04 - 188ms/epoch - 4ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.17944
48/48 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0556 - val_loss: 0.2208 - val_mse: 0.2208 - val_mae: 0.4323 - lr: 1.0000e-04 - 187ms/epoch - 4ms/step
Epoch 35/500

Epoch 00035: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00035: val_loss did not improve from 0.17944
48/48 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0566 - val_loss: 0.2211 - val_mse: 0.2211 - val_mae: 0.4326 - lr: 1.0000e-04 - 183ms/epoch - 4ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.17944
48/48 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0527 - val_loss: 0.2209 - val_mse: 0.2209 - val_mae: 0.4324 - lr: 1.0000e-05 - 191ms/epoch - 4ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.17944
48/48 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0510 - val_loss: 0.2209 - val_mse: 0.2209 - val_mae: 0.4325 - lr: 1.0000e-05 - 177ms/epoch - 4ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.17944
48/48 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0536 - val_loss: 0.2207 - val_mse: 0.2207 - val_mae: 0.4322 - lr: 1.0000e-05 - 193ms/epoch - 4ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.17944
48/48 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0522 - val_loss: 0.2206 - val_mse: 0.2206 - val_mae: 0.4321 - lr: 1.0000e-05 - 180ms/epoch - 4ms/step
Epoch 40/500

Epoch 00040: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00040: val_loss did not improve from 0.17944
48/48 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0522 - val_loss: 0.2209 - val_mse: 0.2209 - val_mae: 0.4325 - lr: 1.0000e-05 - 186ms/epoch - 4ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.17944
48/48 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0543 - val_loss: 0.2209 - val_mse: 0.2209 - val_mae: 0.4325 - lr: 1.0000e-05 - 180ms/epoch - 4ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.17944
48/48 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0536 - val_loss: 0.2205 - val_mse: 0.2205 - val_mae: 0.4320 - lr: 1.0000e-05 - 185ms/epoch - 4ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.17944
48/48 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0520 - val_loss: 0.2204 - val_mse: 0.2204 - val_mae: 0.4320 - lr: 1.0000e-05 - 189ms/epoch - 4ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.17944
48/48 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0520 - val_loss: 0.2209 - val_mse: 0.2209 - val_mae: 0.4325 - lr: 1.0000e-05 - 184ms/epoch - 4ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.17944
48/48 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0535 - val_loss: 0.2209 - val_mse: 0.2209 - val_mae: 0.4324 - lr: 1.0000e-05 - 204ms/epoch - 4ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.17944
48/48 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0547 - val_loss: 0.2206 - val_mse: 0.2206 - val_mae: 0.4322 - lr: 1.0000e-05 - 186ms/epoch - 4ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.17944
48/48 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0522 - val_loss: 0.2206 - val_mse: 0.2206 - val_mae: 0.4321 - lr: 1.0000e-05 - 189ms/epoch - 4ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.17944
48/48 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0522 - val_loss: 0.2208 - val_mse: 0.2208 - val_mae: 0.4323 - lr: 1.0000e-05 - 182ms/epoch - 4ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.17944
48/48 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0517 - val_loss: 0.2212 - val_mse: 0.2212 - val_mae: 0.4327 - lr: 1.0000e-05 - 188ms/epoch - 4ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.17944
48/48 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0528 - val_loss: 0.2217 - val_mse: 0.2217 - val_mae: 0.4333 - lr: 1.0000e-05 - 175ms/epoch - 4ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.17944
48/48 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0512 - val_loss: 0.2219 - val_mse: 0.2219 - val_mae: 0.4335 - lr: 1.0000e-05 - 179ms/epoch - 4ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.17944
48/48 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0512 - val_loss: 0.2216 - val_mse: 0.2216 - val_mae: 0.4332 - lr: 1.0000e-05 - 182ms/epoch - 4ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.17944
48/48 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0539 - val_loss: 0.2210 - val_mse: 0.2210 - val_mae: 0.4325 - lr: 1.0000e-05 - 179ms/epoch - 4ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.17944
48/48 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0517 - val_loss: 0.2211 - val_mse: 0.2211 - val_mae: 0.4326 - lr: 1.0000e-05 - 191ms/epoch - 4ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.17944
48/48 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0524 - val_loss: 0.2217 - val_mse: 0.2217 - val_mae: 0.4332 - lr: 1.0000e-05 - 179ms/epoch - 4ms/step
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.17944
48/48 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0515 - val_loss: 0.2219 - val_mse: 0.2219 - val_mae: 0.4335 - lr: 1.0000e-05 - 175ms/epoch - 4ms/step
Epoch 57/500

Epoch 00057: val_loss did not improve from 0.17944
48/48 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0540 - val_loss: 0.2216 - val_mse: 0.2216 - val_mae: 0.4331 - lr: 1.0000e-05 - 197ms/epoch - 4ms/step
Epoch 58/500

Epoch 00058: val_loss did not improve from 0.17944
48/48 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0512 - val_loss: 0.2218 - val_mse: 0.2218 - val_mae: 0.4333 - lr: 1.0000e-05 - 190ms/epoch - 4ms/step
Epoch 59/500

Epoch 00059: val_loss did not improve from 0.17944
48/48 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0535 - val_loss: 0.2208 - val_mse: 0.2208 - val_mae: 0.4323 - lr: 1.0000e-05 - 189ms/epoch - 4ms/step
Epoch 60/500

Epoch 00060: val_loss did not improve from 0.17944
48/48 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0519 - val_loss: 0.2203 - val_mse: 0.2203 - val_mae: 0.4318 - lr: 1.0000e-05 - 186ms/epoch - 4ms/step
Epoch 61/500

Epoch 00061: val_loss did not improve from 0.17944
48/48 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0515 - val_loss: 0.2200 - val_mse: 0.2200 - val_mae: 0.4314 - lr: 1.0000e-05 - 187ms/epoch - 4ms/step
Epoch 62/500

Epoch 00062: val_loss did not improve from 0.17944
48/48 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0528 - val_loss: 0.2196 - val_mse: 0.2196 - val_mae: 0.4310 - lr: 1.0000e-05 - 181ms/epoch - 4ms/step
Epoch 63/500

Epoch 00063: val_loss did not improve from 0.17944
48/48 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0524 - val_loss: 0.2199 - val_mse: 0.2199 - val_mae: 0.4314 - lr: 1.0000e-05 - 184ms/epoch - 4ms/step
Epoch 64/500

Epoch 00064: val_loss did not improve from 0.17944
48/48 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0527 - val_loss: 0.2198 - val_mse: 0.2198 - val_mae: 0.4313 - lr: 1.0000e-05 - 203ms/epoch - 4ms/step
Epoch 65/500

Epoch 00065: val_loss did not improve from 0.17944
48/48 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0530 - val_loss: 0.2209 - val_mse: 0.2209 - val_mae: 0.4324 - lr: 1.0000e-05 - 190ms/epoch - 4ms/step
Epoch 66/500

Epoch 00066: val_loss did not improve from 0.17944
48/48 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0513 - val_loss: 0.2213 - val_mse: 0.2213 - val_mae: 0.4328 - lr: 1.0000e-05 - 186ms/epoch - 4ms/step
Epoch 67/500

Epoch 00067: val_loss did not improve from 0.17944
48/48 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0524 - val_loss: 0.2211 - val_mse: 0.2211 - val_mae: 0.4326 - lr: 1.0000e-05 - 186ms/epoch - 4ms/step
Epoch 68/500

Epoch 00068: val_loss did not improve from 0.17944
48/48 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0519 - val_loss: 0.2217 - val_mse: 0.2217 - val_mae: 0.4332 - lr: 1.0000e-05 - 190ms/epoch - 4ms/step
Epoch 69/500

Epoch 00069: val_loss did not improve from 0.17944
48/48 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0526 - val_loss: 0.2219 - val_mse: 0.2219 - val_mae: 0.4335 - lr: 1.0000e-05 - 194ms/epoch - 4ms/step
Epoch 70/500

Epoch 00070: val_loss did not improve from 0.17944
48/48 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0535 - val_loss: 0.2221 - val_mse: 0.2221 - val_mae: 0.4336 - lr: 1.0000e-05 - 200ms/epoch - 4ms/step
Epoch 71/500

Epoch 00071: val_loss did not improve from 0.17944
48/48 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0532 - val_loss: 0.2232 - val_mse: 0.2232 - val_mae: 0.4348 - lr: 1.0000e-05 - 184ms/epoch - 4ms/step
Epoch 72/500

Epoch 00072: val_loss did not improve from 0.17944
48/48 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0507 - val_loss: 0.2233 - val_mse: 0.2233 - val_mae: 0.4349 - lr: 1.0000e-05 - 179ms/epoch - 4ms/step
Epoch 73/500

Epoch 00073: val_loss did not improve from 0.17944
48/48 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0509 - val_loss: 0.2219 - val_mse: 0.2219 - val_mae: 0.4334 - lr: 1.0000e-05 - 194ms/epoch - 4ms/step
Epoch 74/500

Epoch 00074: val_loss did not improve from 0.17944
48/48 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0511 - val_loss: 0.2232 - val_mse: 0.2232 - val_mae: 0.4348 - lr: 1.0000e-05 - 186ms/epoch - 4ms/step
Epoch 75/500

Epoch 00075: val_loss did not improve from 0.17944
48/48 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0516 - val_loss: 0.2239 - val_mse: 0.2239 - val_mae: 0.4356 - lr: 1.0000e-05 - 201ms/epoch - 4ms/step
Epoch 00075: early stopping
SMA
Prediction vs Close:		51.87% Accuracy
Prediction vs Prediction:	48.88% Accuracy
MSE:	 37.148636819682245 
RMSE:	 6.094968155756209 
MAE:	 5.090179331518223
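The low/high-volatility split behind these results (a 17-period SMA as the smooth component for ARIMA, with the residual left for the LSTM) can be reproduced with plain pandas, without TA-Lib. A minimal sketch; the series is hypothetical, and `fillna(0)` mirrors the notebook's handling of the warm-up window:

```python
import numpy as np
import pandas as pd

# Hypothetical close series standing in for df['close']
close = pd.Series(np.linspace(100, 120, 40) + np.sin(np.arange(40)))

# Low-volatility component: 17-period simple moving average
# (pandas equivalent of TA-Lib's SMA with timeperiod=17)
low_vol = close.rolling(window=17).mean().fillna(0)

# High-volatility component: residual after removing the smooth part
high_vol = close - low_vol

# By construction, the two components recombine exactly into the close
recombined = low_vol + high_vol
```

Because the decomposition is additive, summing the ARIMA forecast of `low_vol` and the LSTM forecast of `high_vol` yields a forecast of the close itself, which is exactly what `final_prediction` does in the loop above.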
EMA
EMA([input_arrays], [timeperiod=30])

Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
51

Working on EMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-17005.775, Time=2.31 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14574.593, Time=3.91 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16801.081, Time=8.99 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14572.593, Time=5.31 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-14532.068, Time=7.29 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-15610.472, Time=12.83 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-16103.302, Time=13.06 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-17030.021, Time=4.15 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=-17004.614, Time=3.48 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=-17087.441, Time=6.61 sec
 ARIMA(3,3,2)(0,0,0)[0]             : AIC=inf, Time=16.57 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal
  return np.roots(self.polynomial_reduced_ma)**-1
 ARIMA(2,3,2)(0,0,0)[0]             : AIC=-17003.984, Time=3.00 sec
 ARIMA(3,3,1)(0,0,0)[0] intercept   : AIC=-16998.666, Time=3.68 sec

Best model:  ARIMA(3,3,1)(0,0,0)[0]          
Total fit time: 91.197 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 1)   Log Likelihood                8569.720
Date:                Sun, 12 Dec 2021   AIC                         -17087.441
Time:                        15:06:04   BIC                         -16965.479
Sample:                             0   HQIC                        -17040.602
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -2.316e-10   9.87e-21  -2.35e+10      0.000   -2.32e-10   -2.32e-10
x2         -2.308e-10   9.85e-21  -2.34e+10      0.000   -2.31e-10   -2.31e-10
x3         -2.324e-10   9.88e-21  -2.35e+10      0.000   -2.32e-10   -2.32e-10
x4             1.0000   9.87e-21   1.01e+20      0.000       1.000       1.000
x5         -2.106e-10   9.41e-21  -2.24e+10      0.000   -2.11e-10   -2.11e-10
x6         -7.996e-10   1.74e-20  -4.59e+10      0.000      -8e-10      -8e-10
x7         -2.295e-10   9.82e-21  -2.34e+10      0.000   -2.29e-10   -2.29e-10
x8         -2.244e-10   9.71e-21  -2.31e+10      0.000   -2.24e-10   -2.24e-10
x9         -1.166e-11   1.98e-21   -5.9e+09      0.000   -1.17e-11   -1.17e-11
x10        -4.453e-11   4.22e-21  -1.06e+10      0.000   -4.45e-11   -4.45e-11
x11        -2.219e-10   9.65e-21   -2.3e+10      0.000   -2.22e-10   -2.22e-10
x12        -2.264e-10   9.76e-21  -2.32e+10      0.000   -2.26e-10   -2.26e-10
x13        -2.315e-10   9.87e-21  -2.35e+10      0.000   -2.31e-10   -2.31e-10
x14        -1.766e-09   2.73e-20  -6.48e+10      0.000   -1.77e-09   -1.77e-09
x15        -2.167e-10   9.37e-21  -2.31e+10      0.000   -2.17e-10   -2.17e-10
x16        -5.232e-10   1.49e-20  -3.52e+10      0.000   -5.23e-10   -5.23e-10
x17        -2.147e-10   9.48e-21  -2.27e+10      0.000   -2.15e-10   -2.15e-10
x18        -3.791e-11   3.96e-21  -9.56e+09      0.000   -3.79e-11   -3.79e-11
x19        -2.597e-10   1.05e-20  -2.48e+10      0.000    -2.6e-10    -2.6e-10
x20        -2.417e-10      1e-20  -2.41e+10      0.000   -2.42e-10   -2.42e-10
x21        -4.823e-10    1.4e-20  -3.44e+10      0.000   -4.82e-10   -4.82e-10
ar.L1         -0.4923   1.46e-22  -3.38e+21      0.000      -0.492      -0.492
ar.L2         -0.1923   8.47e-23  -2.27e+21      0.000      -0.192      -0.192
ar.L3         -0.0462   4.02e-23  -1.15e+21      0.000      -0.046      -0.046
ma.L1         -0.7077   3.31e-22  -2.14e+21      0.000      -0.708      -0.708
sigma2       8.99e-11   6.95e-11      1.293      0.196   -4.64e-11    2.26e-10
===================================================================================
Ljung-Box (L1) (Q):                  54.09   Jarque-Bera (JB):           4207353.17
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             5.48
Prob(H) (two-sided):                  0.00   Kurtosis:                       357.00
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 3.15e+40. Standard errors may be unstable.
ARIMA order: (3, 3, 1) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.05006, saving model to LSTM7.h5
16/16 - 2s - loss: 0.2215 - mse: 0.2215 - mae: 0.3350 - val_loss: 0.0501 - val_mse: 0.0501 - val_mae: 0.1733 - lr: 0.0010 - 2s/epoch - 134ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.05006
16/16 - 0s - loss: 0.0689 - mse: 0.0689 - mae: 0.2211 - val_loss: 0.0511 - val_mse: 0.0511 - val_mae: 0.1710 - lr: 0.0010 - 78ms/epoch - 5ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.05006
16/16 - 0s - loss: 0.0288 - mse: 0.0288 - mae: 0.1337 - val_loss: 0.0510 - val_mse: 0.0510 - val_mae: 0.1698 - lr: 0.0010 - 87ms/epoch - 5ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.05006 to 0.04261, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0260 - mse: 0.0260 - mae: 0.1281 - val_loss: 0.0426 - val_mse: 0.0426 - val_mae: 0.1669 - lr: 0.0010 - 96ms/epoch - 6ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.04261 to 0.04178, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0194 - mse: 0.0194 - mae: 0.1136 - val_loss: 0.0418 - val_mse: 0.0418 - val_mae: 0.1637 - lr: 0.0010 - 96ms/epoch - 6ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.04178 to 0.04063, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0179 - mse: 0.0179 - mae: 0.1068 - val_loss: 0.0406 - val_mse: 0.0406 - val_mae: 0.1600 - lr: 0.0010 - 88ms/epoch - 6ms/step
Epoch 7/500

Epoch 00007: val_loss improved from 0.04063 to 0.04050, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0148 - mse: 0.0148 - mae: 0.0963 - val_loss: 0.0405 - val_mse: 0.0405 - val_mae: 0.1560 - lr: 0.0010 - 85ms/epoch - 5ms/step
Epoch 8/500

Epoch 00008: val_loss improved from 0.04050 to 0.03978, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0142 - mse: 0.0142 - mae: 0.0945 - val_loss: 0.0398 - val_mse: 0.0398 - val_mae: 0.1534 - lr: 0.0010 - 95ms/epoch - 6ms/step
Epoch 9/500

Epoch 00009: val_loss improved from 0.03978 to 0.03869, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0134 - mse: 0.0134 - mae: 0.0928 - val_loss: 0.0387 - val_mse: 0.0387 - val_mae: 0.1508 - lr: 0.0010 - 84ms/epoch - 5ms/step
Epoch 10/500

Epoch 00010: val_loss improved from 0.03869 to 0.03820, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0116 - mse: 0.0116 - mae: 0.0863 - val_loss: 0.0382 - val_mse: 0.0382 - val_mae: 0.1486 - lr: 0.0010 - 86ms/epoch - 5ms/step
Epoch 11/500

Epoch 00011: val_loss improved from 0.03820 to 0.03602, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0112 - mse: 0.0112 - mae: 0.0833 - val_loss: 0.0360 - val_mse: 0.0360 - val_mae: 0.1464 - lr: 0.0010 - 89ms/epoch - 6ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.03602
16/16 - 0s - loss: 0.0106 - mse: 0.0106 - mae: 0.0824 - val_loss: 0.0366 - val_mse: 0.0366 - val_mae: 0.1438 - lr: 0.0010 - 72ms/epoch - 4ms/step
Epoch 13/500

Epoch 00013: val_loss improved from 0.03602 to 0.03489, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0110 - mse: 0.0110 - mae: 0.0830 - val_loss: 0.0349 - val_mse: 0.0349 - val_mae: 0.1405 - lr: 0.0010 - 92ms/epoch - 6ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.03489
16/16 - 0s - loss: 0.0096 - mse: 0.0096 - mae: 0.0762 - val_loss: 0.0361 - val_mse: 0.0361 - val_mae: 0.1390 - lr: 0.0010 - 86ms/epoch - 5ms/step
Epoch 15/500

Epoch 00015: val_loss improved from 0.03489 to 0.03484, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0093 - mse: 0.0093 - mae: 0.0761 - val_loss: 0.0348 - val_mse: 0.0348 - val_mae: 0.1364 - lr: 0.0010 - 101ms/epoch - 6ms/step
Epoch 16/500

Epoch 00016: val_loss improved from 0.03484 to 0.03461, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0088 - mse: 0.0088 - mae: 0.0755 - val_loss: 0.0346 - val_mse: 0.0346 - val_mae: 0.1346 - lr: 0.0010 - 88ms/epoch - 5ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.03461
16/16 - 0s - loss: 0.0085 - mse: 0.0085 - mae: 0.0727 - val_loss: 0.0363 - val_mse: 0.0363 - val_mae: 0.1351 - lr: 0.0010 - 80ms/epoch - 5ms/step
Epoch 18/500

Epoch 00018: val_loss improved from 0.03461 to 0.03449, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0080 - mse: 0.0080 - mae: 0.0710 - val_loss: 0.0345 - val_mse: 0.0345 - val_mae: 0.1322 - lr: 0.0010 - 109ms/epoch - 7ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.03449
16/16 - 0s - loss: 0.0074 - mse: 0.0074 - mae: 0.0681 - val_loss: 0.0351 - val_mse: 0.0351 - val_mae: 0.1324 - lr: 0.0010 - 76ms/epoch - 5ms/step
Epoch 20/500

Epoch 00020: val_loss improved from 0.03449 to 0.03434, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0079 - mse: 0.0079 - mae: 0.0711 - val_loss: 0.0343 - val_mse: 0.0343 - val_mae: 0.1310 - lr: 0.0010 - 87ms/epoch - 5ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.03434
16/16 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0649 - val_loss: 0.0361 - val_mse: 0.0361 - val_mae: 0.1337 - lr: 0.0010 - 77ms/epoch - 5ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.03434
16/16 - 0s - loss: 0.0074 - mse: 0.0074 - mae: 0.0685 - val_loss: 0.0365 - val_mse: 0.0365 - val_mae: 0.1346 - lr: 0.0010 - 74ms/epoch - 5ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.03434
16/16 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0647 - val_loss: 0.0372 - val_mse: 0.0372 - val_mae: 0.1362 - lr: 0.0010 - 71ms/epoch - 4ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.03434
16/16 - 0s - loss: 0.0067 - mse: 0.0067 - mae: 0.0635 - val_loss: 0.0374 - val_mse: 0.0374 - val_mae: 0.1373 - lr: 0.0010 - 71ms/epoch - 4ms/step
Epoch 25/500

Epoch 00025: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00025: val_loss did not improve from 0.03434
16/16 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0636 - val_loss: 0.0378 - val_mse: 0.0378 - val_mae: 0.1387 - lr: 0.0010 - 84ms/epoch - 5ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.03434
16/16 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0588 - val_loss: 0.0375 - val_mse: 0.0375 - val_mae: 0.1379 - lr: 1.0000e-04 - 83ms/epoch - 5ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.03434
16/16 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0577 - val_loss: 0.0370 - val_mse: 0.0370 - val_mae: 0.1371 - lr: 1.0000e-04 - 78ms/epoch - 5ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.03434
16/16 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0587 - val_loss: 0.0369 - val_mse: 0.0369 - val_mae: 0.1368 - lr: 1.0000e-04 - 80ms/epoch - 5ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.03434
16/16 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0599 - val_loss: 0.0370 - val_mse: 0.0370 - val_mae: 0.1371 - lr: 1.0000e-04 - 78ms/epoch - 5ms/step
Epoch 30/500

Epoch 00030: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00030: val_loss did not improve from 0.03434
16/16 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0617 - val_loss: 0.0371 - val_mse: 0.0371 - val_mae: 0.1373 - lr: 1.0000e-04 - 72ms/epoch - 5ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.03434
16/16 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0612 - val_loss: 0.0371 - val_mse: 0.0371 - val_mae: 0.1373 - lr: 1.0000e-05 - 74ms/epoch - 5ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.03434
16/16 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0595 - val_loss: 0.0371 - val_mse: 0.0371 - val_mae: 0.1373 - lr: 1.0000e-05 - 75ms/epoch - 5ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.03434
16/16 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0611 - val_loss: 0.0371 - val_mse: 0.0371 - val_mae: 0.1374 - lr: 1.0000e-05 - 77ms/epoch - 5ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.03434
16/16 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0573 - val_loss: 0.0371 - val_mse: 0.0371 - val_mae: 0.1374 - lr: 1.0000e-05 - 75ms/epoch - 5ms/step
Epoch 35/500

Epoch 00035: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00035: val_loss did not improve from 0.03434
16/16 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0567 - val_loss: 0.0372 - val_mse: 0.0372 - val_mae: 0.1375 - lr: 1.0000e-05 - 76ms/epoch - 5ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.03434
16/16 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0604 - val_loss: 0.0372 - val_mse: 0.0372 - val_mae: 0.1376 - lr: 1.0000e-05 - 74ms/epoch - 5ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.03434
16/16 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0578 - val_loss: 0.0373 - val_mse: 0.0373 - val_mae: 0.1377 - lr: 1.0000e-05 - 86ms/epoch - 5ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.03434
16/16 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0577 - val_loss: 0.0373 - val_mse: 0.0373 - val_mae: 0.1378 - lr: 1.0000e-05 - 85ms/epoch - 5ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.03434
16/16 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0575 - val_loss: 0.0374 - val_mse: 0.0374 - val_mae: 0.1380 - lr: 1.0000e-05 - 83ms/epoch - 5ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.03434
16/16 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0586 - val_loss: 0.0374 - val_mse: 0.0374 - val_mae: 0.1381 - lr: 1.0000e-05 - 76ms/epoch - 5ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.03434
16/16 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0595 - val_loss: 0.0374 - val_mse: 0.0374 - val_mae: 0.1381 - lr: 1.0000e-05 - 79ms/epoch - 5ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.03434
16/16 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0579 - val_loss: 0.0375 - val_mse: 0.0375 - val_mae: 0.1382 - lr: 1.0000e-05 - 76ms/epoch - 5ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.03434
16/16 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0616 - val_loss: 0.0375 - val_mse: 0.0375 - val_mae: 0.1382 - lr: 1.0000e-05 - 85ms/epoch - 5ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.03434
16/16 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0569 - val_loss: 0.0375 - val_mse: 0.0375 - val_mae: 0.1383 - lr: 1.0000e-05 - 74ms/epoch - 5ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.03434
16/16 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0591 - val_loss: 0.0375 - val_mse: 0.0375 - val_mae: 0.1384 - lr: 1.0000e-05 - 78ms/epoch - 5ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.03434
16/16 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0589 - val_loss: 0.0376 - val_mse: 0.0376 - val_mae: 0.1384 - lr: 1.0000e-05 - 78ms/epoch - 5ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.03434
16/16 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0549 - val_loss: 0.0376 - val_mse: 0.0376 - val_mae: 0.1384 - lr: 1.0000e-05 - 77ms/epoch - 5ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.03434
16/16 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0573 - val_loss: 0.0375 - val_mse: 0.0375 - val_mae: 0.1384 - lr: 1.0000e-05 - 79ms/epoch - 5ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.03434
16/16 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0587 - val_loss: 0.0376 - val_mse: 0.0376 - val_mae: 0.1384 - lr: 1.0000e-05 - 87ms/epoch - 5ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.03434
16/16 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0585 - val_loss: 0.0376 - val_mse: 0.0376 - val_mae: 0.1385 - lr: 1.0000e-05 - 87ms/epoch - 5ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.03434
16/16 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0589 - val_loss: 0.0376 - val_mse: 0.0376 - val_mae: 0.1385 - lr: 1.0000e-05 - 84ms/epoch - 5ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.03434
16/16 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0574 - val_loss: 0.0376 - val_mse: 0.0376 - val_mae: 0.1386 - lr: 1.0000e-05 - 79ms/epoch - 5ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.03434
16/16 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0604 - val_loss: 0.0376 - val_mse: 0.0376 - val_mae: 0.1386 - lr: 1.0000e-05 - 80ms/epoch - 5ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.03434
16/16 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0590 - val_loss: 0.0377 - val_mse: 0.0377 - val_mae: 0.1387 - lr: 1.0000e-05 - 71ms/epoch - 4ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.03434
16/16 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0581 - val_loss: 0.0377 - val_mse: 0.0377 - val_mae: 0.1388 - lr: 1.0000e-05 - 76ms/epoch - 5ms/step
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.03434
16/16 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0577 - val_loss: 0.0377 - val_mse: 0.0377 - val_mae: 0.1388 - lr: 1.0000e-05 - 77ms/epoch - 5ms/step
Epoch 57/500

Epoch 00057: val_loss did not improve from 0.03434
16/16 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0621 - val_loss: 0.0377 - val_mse: 0.0377 - val_mae: 0.1388 - lr: 1.0000e-05 - 83ms/epoch - 5ms/step
Epoch 58/500

Epoch 00058: val_loss did not improve from 0.03434
16/16 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0586 - val_loss: 0.0377 - val_mse: 0.0377 - val_mae: 0.1388 - lr: 1.0000e-05 - 75ms/epoch - 5ms/step
Epoch 59/500

Epoch 00059: val_loss did not improve from 0.03434
16/16 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0594 - val_loss: 0.0377 - val_mse: 0.0377 - val_mae: 0.1388 - lr: 1.0000e-05 - 77ms/epoch - 5ms/step
Epoch 60/500

Epoch 00060: val_loss did not improve from 0.03434
16/16 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0569 - val_loss: 0.0377 - val_mse: 0.0377 - val_mae: 0.1389 - lr: 1.0000e-05 - 83ms/epoch - 5ms/step
Epoch 61/500

Epoch 00061: val_loss did not improve from 0.03434
16/16 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0557 - val_loss: 0.0378 - val_mse: 0.0378 - val_mae: 0.1390 - lr: 1.0000e-05 - 85ms/epoch - 5ms/step
Epoch 62/500

Epoch 00062: val_loss did not improve from 0.03434
16/16 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0598 - val_loss: 0.0378 - val_mse: 0.0378 - val_mae: 0.1390 - lr: 1.0000e-05 - 81ms/epoch - 5ms/step
Epoch 63/500

Epoch 00063: val_loss did not improve from 0.03434
16/16 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0572 - val_loss: 0.0378 - val_mse: 0.0378 - val_mae: 0.1390 - lr: 1.0000e-05 - 82ms/epoch - 5ms/step
Epoch 64/500

Epoch 00064: val_loss did not improve from 0.03434
16/16 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0579 - val_loss: 0.0378 - val_mse: 0.0378 - val_mae: 0.1392 - lr: 1.0000e-05 - 85ms/epoch - 5ms/step
Epoch 65/500

Epoch 00065: val_loss did not improve from 0.03434
16/16 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0560 - val_loss: 0.0379 - val_mse: 0.0379 - val_mae: 0.1392 - lr: 1.0000e-05 - 79ms/epoch - 5ms/step
Epoch 66/500

Epoch 00066: val_loss did not improve from 0.03434
16/16 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0583 - val_loss: 0.0379 - val_mse: 0.0379 - val_mae: 0.1394 - lr: 1.0000e-05 - 73ms/epoch - 5ms/step
Epoch 67/500

Epoch 00067: val_loss did not improve from 0.03434
16/16 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0587 - val_loss: 0.0379 - val_mse: 0.0379 - val_mae: 0.1394 - lr: 1.0000e-05 - 78ms/epoch - 5ms/step
Epoch 68/500

Epoch 00068: val_loss did not improve from 0.03434
16/16 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0588 - val_loss: 0.0379 - val_mse: 0.0379 - val_mae: 0.1394 - lr: 1.0000e-05 - 79ms/epoch - 5ms/step
Epoch 69/500

Epoch 00069: val_loss did not improve from 0.03434
16/16 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0557 - val_loss: 0.0380 - val_mse: 0.0380 - val_mae: 0.1395 - lr: 1.0000e-05 - 79ms/epoch - 5ms/step
Epoch 70/500

Epoch 00070: val_loss did not improve from 0.03434
16/16 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0576 - val_loss: 0.0379 - val_mse: 0.0379 - val_mae: 0.1395 - lr: 1.0000e-05 - 74ms/epoch - 5ms/step
Epoch 00070: early stopping
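The training cell itself is not shown in this chunk, but the log above implies a ModelCheckpoint saving the best weights to `LSTM7.h5`, a ReduceLROnPlateau cutting the learning rate by 10x down to a floor of 1e-5, and an EarlyStopping that fired 50 epochs after the best validation loss (epoch 20 → stop at epoch 70). A plain-Python sketch of that plateau logic, with the patience values inferred from the log rather than taken from the notebook's code:

```python
def schedule(val_losses, lr=1e-3, factor=0.1, lr_patience=4,
             stop_patience=50, min_lr=1e-5):
    """Replay per-epoch validation losses under plateau rules and
    return (final_lr, stopped_epoch, best_epoch). The patience and
    factor values are inferred from the log above, not taken from
    the (unshown) training cell."""
    best, best_epoch, wait = float("inf"), 0, 0
    for epoch, vl in enumerate(val_losses, start=1):
        if vl < best:
            best, best_epoch, wait = vl, epoch, 0
        else:
            wait += 1
            if wait % lr_patience == 0:        # plateau: cut the LR, floor at min_lr
                lr = max(lr * factor, min_lr)
            if wait >= stop_patience:          # plateau: stop training
                return lr, epoch, best_epoch
    return lr, len(val_losses), best_epoch
```

For a loss curve that improves twice and then flatlines, this reproduces the pattern in the log: two LR cuts to the 1e-5 floor, then early stopping 50 epochs after the best epoch.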
SMA
Prediction vs Close:		51.87% Accuracy
Prediction vs Prediction:	48.88% Accuracy
MSE:	 37.148636819682245 
RMSE:	 6.094968155756209 
MAPE:	 5.090179331518223

EMA
Prediction vs Close:		55.22% Accuracy
Prediction vs Prediction:	47.39% Accuracy
MSE:	 31.582654654902484 
RMSE:	 5.619844718041815 
MAPE:	 4.507182634072088
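The evaluation code is not shown in this chunk, but MSE, RMSE, and MAPE as printed above follow the standard definitions and can be reproduced in a few NumPy lines:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MSE, RMSE, and MAPE (in percent) in the standard forms;
    assumed to match the notebook's unshown evaluation code."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mse = np.mean((y_true - y_pred) ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0
    return mse, rmse, mape
```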
WMA
WMA([input_arrays], [timeperiod=30])

Weighted Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
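TA-Lib's WMA (help text above) is a linearly weighted moving average: within each window the newest price gets weight `timeperiod` and the oldest gets weight 1. A NumPy sketch of that definition:

```python
import numpy as np

def wma(price, timeperiod=30):
    """Linearly weighted moving average. The first timeperiod - 1
    outputs are NaN, mirroring TA-Lib's unstable period."""
    price = np.asarray(price, dtype=float)
    w = np.arange(1, timeperiod + 1, dtype=float)   # weights 1..n, newest heaviest
    out = np.full(price.shape, np.nan)
    for i in range(timeperiod - 1, len(price)):
        out[i] = np.dot(price[i - timeperiod + 1 : i + 1], w) / w.sum()
    return out
```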
49

Working on WMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-14480.432, Time=9.44 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-15747.905, Time=6.44 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-15116.389, Time=7.16 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-13532.115, Time=7.81 sec
 ARIMA(0,3,0)(0,0,0)[0] intercept   : AIC=-13619.624, Time=5.40 sec

Best model:  ARIMA(0,3,0)(0,0,0)[0]          
Total fit time: 36.273 seconds
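The stepwise search minimizes AIC = 2k − 2·ln L. As a sanity check against the summary table that follows (log-likelihood 7895.952, with k = 22 free parameters if one counts x1–x21 plus sigma2), the arithmetic recovers the reported AIC up to rounding:

```python
# AIC = 2k - 2*ln(L); k = 22 assumes x1..x21 plus sigma2 are the free parameters
log_likelihood = 7895.952
k = 22
aic = 2 * k - 2 * log_likelihood   # ~ -15747.90, matching the summary table
```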
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(0, 3, 0)   Log Likelihood                7895.952
Date:                Sun, 12 Dec 2021   AIC                         -15747.905
Time:                        15:14:07   BIC                         -15644.706
Sample:                             0   HQIC                        -15708.272
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1          3.384e-05    1.9e-05      1.778      0.075   -3.47e-06    7.12e-05
x2          3.379e-05   1.84e-05      1.832      0.067   -2.35e-06    6.99e-05
x3          3.388e-05   4.34e-05      0.781      0.435   -5.12e-05       0.000
x4             1.0000   4.12e-06   2.43e+05      0.000       1.000       1.000
x5          3.227e-05   3.52e-06      9.163      0.000    2.54e-05    3.92e-05
x6          5.559e-05   6.75e-05      0.823      0.410   -7.67e-05       0.000
x7          3.369e-05   2.38e-05      1.415      0.157    -1.3e-05    8.03e-05
x8             0.0023    2.6e-05     86.661      0.000       0.002       0.002
x9          -8.72e-06   7.51e-07    -11.610      0.000   -1.02e-05   -7.25e-06
x10           -0.0023   3.33e-05    -67.770      0.000      -0.002      -0.002
x11            0.0093    2.8e-05    333.459      0.000       0.009       0.009
x12           -0.0118   2.37e-05   -498.171      0.000      -0.012      -0.012
x13         3.382e-05   1.49e-05      2.273      0.023    4.66e-06     6.3e-05
x14         9.271e-05   6.21e-05      1.493      0.135    -2.9e-05       0.000
x15         3.096e-05   1.92e-05      1.614      0.106   -6.63e-06    6.86e-05
x16          5.52e-05   7.17e-05      0.770      0.441   -8.53e-05       0.000
x17          3.38e-05    3.2e-05      1.056      0.291   -2.89e-05    9.65e-05
x18        -6.715e-06   8.34e-05     -0.081      0.936      -0.000       0.000
x19         3.428e-05   2.07e-05      1.654      0.098   -6.34e-06    7.49e-05
x20        -8.089e-06   9.55e-05     -0.085      0.933      -0.000       0.000
x21         4.255e-05      0.000      0.094      0.925      -0.001       0.001
sigma2      2.581e-10   7.87e-11      3.280      0.001    1.04e-10    4.12e-10
===================================================================================
Ljung-Box (L1) (Q):                 362.92   Jarque-Bera (JB):           5047564.68
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.01   Skew:                           -11.23
Prob(H) (two-sided):                  0.00   Kurtosis:                       390.28
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.75e+20. Standard errors may be unstable.
ARIMA order: (0, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.39083, saving model to LSTM7.h5
17/17 - 3s - loss: 0.8312 - mse: 0.8312 - mae: 0.7302 - val_loss: 0.3908 - val_mse: 0.3908 - val_mae: 0.5818 - lr: 0.0010 - 3s/epoch - 155ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.39083 to 0.24821, saving model to LSTM7.h5
17/17 - 0s - loss: 0.0523 - mse: 0.0523 - mae: 0.1902 - val_loss: 0.2482 - val_mse: 0.2482 - val_mae: 0.4551 - lr: 0.0010 - 91ms/epoch - 5ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.24821
17/17 - 0s - loss: 0.0369 - mse: 0.0369 - mae: 0.1630 - val_loss: 0.3136 - val_mse: 0.3136 - val_mae: 0.5155 - lr: 0.0010 - 86ms/epoch - 5ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.24821
17/17 - 0s - loss: 0.0196 - mse: 0.0196 - mae: 0.1106 - val_loss: 0.3367 - val_mse: 0.3367 - val_mae: 0.5347 - lr: 0.0010 - 87ms/epoch - 5ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.24821
17/17 - 0s - loss: 0.0230 - mse: 0.0230 - mae: 0.1211 - val_loss: 0.2966 - val_mse: 0.2966 - val_mae: 0.4981 - lr: 0.0010 - 86ms/epoch - 5ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.24821
17/17 - 0s - loss: 0.0189 - mse: 0.0189 - mae: 0.1090 - val_loss: 0.2815 - val_mse: 0.2815 - val_mae: 0.4831 - lr: 0.0010 - 90ms/epoch - 5ms/step
Epoch 7/500

Epoch 00007: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00007: val_loss did not improve from 0.24821
17/17 - 0s - loss: 0.0188 - mse: 0.0188 - mae: 0.1074 - val_loss: 0.3017 - val_mse: 0.3017 - val_mae: 0.5011 - lr: 0.0010 - 86ms/epoch - 5ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.24821
17/17 - 0s - loss: 0.0157 - mse: 0.0157 - mae: 0.0999 - val_loss: 0.3026 - val_mse: 0.3026 - val_mae: 0.5019 - lr: 1.0000e-04 - 82ms/epoch - 5ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.24821
17/17 - 0s - loss: 0.0178 - mse: 0.0178 - mae: 0.1052 - val_loss: 0.3013 - val_mse: 0.3013 - val_mae: 0.5006 - lr: 1.0000e-04 - 84ms/epoch - 5ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.24821
17/17 - 0s - loss: 0.0165 - mse: 0.0165 - mae: 0.1009 - val_loss: 0.3016 - val_mse: 0.3016 - val_mae: 0.5008 - lr: 1.0000e-04 - 80ms/epoch - 5ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.24821
17/17 - 0s - loss: 0.0165 - mse: 0.0165 - mae: 0.1024 - val_loss: 0.3005 - val_mse: 0.3005 - val_mae: 0.4998 - lr: 1.0000e-04 - 80ms/epoch - 5ms/step
Epoch 12/500

Epoch 00012: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00012: val_loss did not improve from 0.24821
17/17 - 0s - loss: 0.0168 - mse: 0.0168 - mae: 0.1029 - val_loss: 0.3009 - val_mse: 0.3009 - val_mae: 0.5000 - lr: 1.0000e-04 - 83ms/epoch - 5ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.24821
17/17 - 0s - loss: 0.0167 - mse: 0.0167 - mae: 0.0996 - val_loss: 0.3009 - val_mse: 0.3009 - val_mae: 0.5001 - lr: 1.0000e-05 - 79ms/epoch - 5ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.24821
17/17 - 0s - loss: 0.0164 - mse: 0.0164 - mae: 0.1001 - val_loss: 0.3009 - val_mse: 0.3009 - val_mae: 0.5000 - lr: 1.0000e-05 - 91ms/epoch - 5ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.24821
17/17 - 0s - loss: 0.0150 - mse: 0.0150 - mae: 0.0963 - val_loss: 0.3007 - val_mse: 0.3007 - val_mae: 0.4999 - lr: 1.0000e-05 - 94ms/epoch - 6ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.24821
17/17 - 0s - loss: 0.0170 - mse: 0.0170 - mae: 0.1022 - val_loss: 0.3007 - val_mse: 0.3007 - val_mae: 0.4999 - lr: 1.0000e-05 - 86ms/epoch - 5ms/step
Epoch 17/500

Epoch 00017: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00017: val_loss did not improve from 0.24821
17/17 - 0s - loss: 0.0160 - mse: 0.0160 - mae: 0.1002 - val_loss: 0.3010 - val_mse: 0.3010 - val_mae: 0.5002 - lr: 1.0000e-05 - 85ms/epoch - 5ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.24821
17/17 - 0s - loss: 0.0165 - mse: 0.0165 - mae: 0.1006 - val_loss: 0.3010 - val_mse: 0.3010 - val_mae: 0.5001 - lr: 1.0000e-05 - 85ms/epoch - 5ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.24821
17/17 - 0s - loss: 0.0158 - mse: 0.0158 - mae: 0.1004 - val_loss: 0.3009 - val_mse: 0.3009 - val_mae: 0.5000 - lr: 1.0000e-05 - 77ms/epoch - 5ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.24821
17/17 - 0s - loss: 0.0154 - mse: 0.0154 - mae: 0.0980 - val_loss: 0.3008 - val_mse: 0.3008 - val_mae: 0.4999 - lr: 1.0000e-05 - 86ms/epoch - 5ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.24821
17/17 - 0s - loss: 0.0162 - mse: 0.0162 - mae: 0.1005 - val_loss: 0.3008 - val_mse: 0.3008 - val_mae: 0.4999 - lr: 1.0000e-05 - 88ms/epoch - 5ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.24821
17/17 - 0s - loss: 0.0136 - mse: 0.0136 - mae: 0.0947 - val_loss: 0.3008 - val_mse: 0.3008 - val_mae: 0.4999 - lr: 1.0000e-05 - 80ms/epoch - 5ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.24821
17/17 - 0s - loss: 0.0166 - mse: 0.0166 - mae: 0.1007 - val_loss: 0.3006 - val_mse: 0.3006 - val_mae: 0.4997 - lr: 1.0000e-05 - 78ms/epoch - 5ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.24821
17/17 - 0s - loss: 0.0154 - mse: 0.0154 - mae: 0.0985 - val_loss: 0.3007 - val_mse: 0.3007 - val_mae: 0.4998 - lr: 1.0000e-05 - 81ms/epoch - 5ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.24821
17/17 - 0s - loss: 0.0150 - mse: 0.0150 - mae: 0.0966 - val_loss: 0.3008 - val_mse: 0.3008 - val_mae: 0.4998 - lr: 1.0000e-05 - 84ms/epoch - 5ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.24821
17/17 - 0s - loss: 0.0149 - mse: 0.0149 - mae: 0.0965 - val_loss: 0.3009 - val_mse: 0.3009 - val_mae: 0.5000 - lr: 1.0000e-05 - 91ms/epoch - 5ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.24821
17/17 - 0s - loss: 0.0169 - mse: 0.0169 - mae: 0.1035 - val_loss: 0.3006 - val_mse: 0.3006 - val_mae: 0.4997 - lr: 1.0000e-05 - 101ms/epoch - 6ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.24821
17/17 - 0s - loss: 0.0157 - mse: 0.0157 - mae: 0.0997 - val_loss: 0.3005 - val_mse: 0.3005 - val_mae: 0.4995 - lr: 1.0000e-05 - 93ms/epoch - 5ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.24821
17/17 - 0s - loss: 0.0153 - mse: 0.0153 - mae: 0.0973 - val_loss: 0.3007 - val_mse: 0.3007 - val_mae: 0.4997 - lr: 1.0000e-05 - 83ms/epoch - 5ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.24821
17/17 - 0s - loss: 0.0183 - mse: 0.0183 - mae: 0.1053 - val_loss: 0.3004 - val_mse: 0.3004 - val_mae: 0.4995 - lr: 1.0000e-05 - 83ms/epoch - 5ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.24821
17/17 - 0s - loss: 0.0154 - mse: 0.0154 - mae: 0.0975 - val_loss: 0.3003 - val_mse: 0.3003 - val_mae: 0.4994 - lr: 1.0000e-05 - 81ms/epoch - 5ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.24821
17/17 - 0s - loss: 0.0160 - mse: 0.0160 - mae: 0.0993 - val_loss: 0.3002 - val_mse: 0.3002 - val_mae: 0.4992 - lr: 1.0000e-05 - 84ms/epoch - 5ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.24821
17/17 - 0s - loss: 0.0162 - mse: 0.0162 - mae: 0.1020 - val_loss: 0.2999 - val_mse: 0.2999 - val_mae: 0.4990 - lr: 1.0000e-05 - 78ms/epoch - 5ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.24821
17/17 - 0s - loss: 0.0160 - mse: 0.0160 - mae: 0.1016 - val_loss: 0.3000 - val_mse: 0.3000 - val_mae: 0.4990 - lr: 1.0000e-05 - 81ms/epoch - 5ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.24821
17/17 - 0s - loss: 0.0154 - mse: 0.0154 - mae: 0.0968 - val_loss: 0.2997 - val_mse: 0.2997 - val_mae: 0.4987 - lr: 1.0000e-05 - 82ms/epoch - 5ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.24821
17/17 - 0s - loss: 0.0158 - mse: 0.0158 - mae: 0.0991 - val_loss: 0.2996 - val_mse: 0.2996 - val_mae: 0.4986 - lr: 1.0000e-05 - 84ms/epoch - 5ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.24821
17/17 - 0s - loss: 0.0153 - mse: 0.0153 - mae: 0.0971 - val_loss: 0.2996 - val_mse: 0.2996 - val_mae: 0.4986 - lr: 1.0000e-05 - 90ms/epoch - 5ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.24821
17/17 - 0s - loss: 0.0175 - mse: 0.0175 - mae: 0.1026 - val_loss: 0.2994 - val_mse: 0.2994 - val_mae: 0.4984 - lr: 1.0000e-05 - 90ms/epoch - 5ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.24821
17/17 - 0s - loss: 0.0152 - mse: 0.0152 - mae: 0.0980 - val_loss: 0.2994 - val_mse: 0.2994 - val_mae: 0.4984 - lr: 1.0000e-05 - 94ms/epoch - 6ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.24821
17/17 - 0s - loss: 0.0171 - mse: 0.0171 - mae: 0.1039 - val_loss: 0.2994 - val_mse: 0.2994 - val_mae: 0.4984 - lr: 1.0000e-05 - 94ms/epoch - 6ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.24821
17/17 - 0s - loss: 0.0144 - mse: 0.0144 - mae: 0.0951 - val_loss: 0.2999 - val_mse: 0.2999 - val_mae: 0.4988 - lr: 1.0000e-05 - 84ms/epoch - 5ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.24821
17/17 - 0s - loss: 0.0166 - mse: 0.0166 - mae: 0.1000 - val_loss: 0.3002 - val_mse: 0.3002 - val_mae: 0.4991 - lr: 1.0000e-05 - 85ms/epoch - 5ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.24821
17/17 - 0s - loss: 0.0158 - mse: 0.0158 - mae: 0.0997 - val_loss: 0.3001 - val_mse: 0.3001 - val_mae: 0.4990 - lr: 1.0000e-05 - 85ms/epoch - 5ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.24821
17/17 - 0s - loss: 0.0175 - mse: 0.0175 - mae: 0.1047 - val_loss: 0.3001 - val_mse: 0.3001 - val_mae: 0.4990 - lr: 1.0000e-05 - 82ms/epoch - 5ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.24821
17/17 - 0s - loss: 0.0160 - mse: 0.0160 - mae: 0.0972 - val_loss: 0.3002 - val_mse: 0.3002 - val_mae: 0.4991 - lr: 1.0000e-05 - 83ms/epoch - 5ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.24821
17/17 - 0s - loss: 0.0146 - mse: 0.0146 - mae: 0.0943 - val_loss: 0.3001 - val_mse: 0.3001 - val_mae: 0.4990 - lr: 1.0000e-05 - 79ms/epoch - 5ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.24821
17/17 - 0s - loss: 0.0157 - mse: 0.0157 - mae: 0.0962 - val_loss: 0.2999 - val_mse: 0.2999 - val_mae: 0.4988 - lr: 1.0000e-05 - 79ms/epoch - 5ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.24821
17/17 - 0s - loss: 0.0157 - mse: 0.0157 - mae: 0.1003 - val_loss: 0.3000 - val_mse: 0.3000 - val_mae: 0.4988 - lr: 1.0000e-05 - 96ms/epoch - 6ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.24821
17/17 - 0s - loss: 0.0155 - mse: 0.0155 - mae: 0.0977 - val_loss: 0.3002 - val_mse: 0.3002 - val_mae: 0.4990 - lr: 1.0000e-05 - 89ms/epoch - 5ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.24821
17/17 - 0s - loss: 0.0164 - mse: 0.0164 - mae: 0.0993 - val_loss: 0.3001 - val_mse: 0.3001 - val_mae: 0.4989 - lr: 1.0000e-05 - 85ms/epoch - 5ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.24821
17/17 - 0s - loss: 0.0161 - mse: 0.0161 - mae: 0.1021 - val_loss: 0.3002 - val_mse: 0.3002 - val_mae: 0.4990 - lr: 1.0000e-05 - 89ms/epoch - 5ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.24821
17/17 - 0s - loss: 0.0159 - mse: 0.0159 - mae: 0.0985 - val_loss: 0.3003 - val_mse: 0.3003 - val_mae: 0.4991 - lr: 1.0000e-05 - 87ms/epoch - 5ms/step
Epoch 00052: early stopping

WMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	48.88% Accuracy
MSE:	 65.00101872981564 
RMSE:	 8.062320926992156 
MAPE:	 6.705711592581163
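The "Prediction vs Close" figures read like directional hit rates: the fraction of steps where the predicted day-over-day move has the same sign as the actual move in the close. The notebook's own definition is not shown in this chunk, so the following is only one plausible reading:

```python
import numpy as np

def directional_accuracy(close, pred):
    """Percent of steps where the sign of the predicted change matches
    the sign of the actual change in the close. This is an assumed
    reading of 'Prediction vs Close % Accuracy', not the notebook's
    confirmed definition."""
    close = np.asarray(close, dtype=float)
    pred = np.asarray(pred, dtype=float)
    actual_dir = np.sign(np.diff(close))
    pred_dir = np.sign(np.diff(pred))
    return 100.0 * np.mean(actual_dir == pred_dir)
```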
DEMA
DEMA([input_arrays], [timeperiod=30])

Double Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
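DEMA (help text above) reduces lag by combining two EMAs: DEMA = 2·EMA(price) − EMA(EMA(price)). A pandas sketch, noting that pandas' `ewm(span=n, adjust=False)` uses the same smoothing factor α = 2/(n+1) as TA-Lib but seeds differently, so the earliest values can diverge slightly from TA-Lib's output:

```python
import pandas as pd

def dema(price, timeperiod=30):
    """Double EMA: 2*EMA - EMA(EMA), with alpha = 2/(timeperiod + 1).
    TA-Lib seeds its EMA with an SMA, so early values may differ
    slightly from TA-Lib's DEMA."""
    s = pd.Series(price, dtype=float)
    ema1 = s.ewm(span=timeperiod, adjust=False).mean()
    ema2 = ema1.ewm(span=timeperiod, adjust=False).mean()
    return 2 * ema1 - ema2
```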
89

Working on DEMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-17005.774, Time=2.05 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14574.593, Time=4.01 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-15590.302, Time=7.02 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14572.593, Time=5.43 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-15269.503, Time=7.16 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-16414.961, Time=8.21 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-16878.396, Time=9.90 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-17030.019, Time=4.60 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=-17004.613, Time=3.06 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=-17087.441, Time=6.28 sec
 ARIMA(3,3,2)(0,0,0)[0]             : AIC=inf, Time=14.03 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal
  return np.roots(self.polynomial_reduced_ma)**-1
 ARIMA(2,3,2)(0,0,0)[0]             : AIC=-17003.985, Time=3.33 sec
 ARIMA(3,3,1)(0,0,0)[0] intercept   : AIC=-16998.665, Time=3.36 sec

Best model:  ARIMA(3,3,1)(0,0,0)[0]          
Total fit time: 78.451 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 1)   Log Likelihood                8569.721
Date:                Sun, 12 Dec 2021   AIC                         -17087.441
Time:                        15:16:00   BIC                         -16965.479
Sample:                             0   HQIC                        -17040.603
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -2.799e-10   1.43e-20  -1.96e+10      0.000    -2.8e-10    -2.8e-10
x2         -2.817e-10   1.43e-20  -1.97e+10      0.000   -2.82e-10   -2.82e-10
x3         -2.805e-10   1.43e-20  -1.96e+10      0.000    -2.8e-10    -2.8e-10
x4             1.0000   1.43e-20      7e+19      0.000       1.000       1.000
x5         -2.597e-10   1.37e-20  -1.89e+10      0.000    -2.6e-10    -2.6e-10
x6         -1.388e-09   3.12e-20  -4.45e+10      0.000   -1.39e-09   -1.39e-09
x7         -2.789e-10   1.42e-20  -1.96e+10      0.000   -2.79e-10   -2.79e-10
x8          -2.76e-10   1.42e-20  -1.95e+10      0.000   -2.76e-10   -2.76e-10
x9         -2.216e-12   3.53e-22  -6.28e+09      0.000   -2.22e-12   -2.22e-12
x10        -1.345e-10   9.82e-21  -1.37e+10      0.000   -1.34e-10   -1.34e-10
x11        -2.898e-10   1.45e-20     -2e+10      0.000    -2.9e-10    -2.9e-10
x12        -2.602e-10   1.38e-20  -1.89e+10      0.000    -2.6e-10    -2.6e-10
x13        -2.807e-10   1.43e-20  -1.96e+10      0.000   -2.81e-10   -2.81e-10
x14         -1.87e-09   3.69e-20  -5.07e+10      0.000   -1.87e-09   -1.87e-09
x15        -2.726e-10   1.43e-20   -1.9e+10      0.000   -2.73e-10   -2.73e-10
x16        -7.915e-11   7.68e-21  -1.03e+10      0.000   -7.92e-11   -7.92e-11
x17        -2.606e-10   1.33e-20  -1.96e+10      0.000   -2.61e-10   -2.61e-10
x18        -6.408e-10   2.16e-20  -2.97e+10      0.000   -6.41e-10   -6.41e-10
x19        -2.881e-10   1.46e-20  -1.98e+10      0.000   -2.88e-10   -2.88e-10
x20        -4.337e-10   1.78e-20  -2.44e+10      0.000   -4.34e-10   -4.34e-10
x21        -4.549e-10   1.79e-20  -2.55e+10      0.000   -4.55e-10   -4.55e-10
ar.L1         -0.4923   1.46e-22  -3.38e+21      0.000      -0.492      -0.492
ar.L2         -0.1923   8.47e-23  -2.27e+21      0.000      -0.192      -0.192
ar.L3         -0.0461   4.02e-23  -1.15e+21      0.000      -0.046      -0.046
ma.L1         -0.7078   3.31e-22  -2.14e+21      0.000      -0.708      -0.708
sigma2       8.99e-11   6.95e-11      1.293      0.196   -4.64e-11    2.26e-10
===================================================================================
Ljung-Box (L1) (Q):                  55.07   Jarque-Bera (JB):           4171695.82
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             5.26
Prob(H) (two-sided):                  0.00   Kurtosis:                       355.51
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 2.62e+41. Standard errors may be unstable.
ARIMA order: (3, 3, 1) 

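The stepwise search above minimizes the Akaike information criterion, AIC = 2k − 2·ln(L̂), which penalizes the maximized log likelihood by the number of estimated parameters. The SARIMAX table can be sanity-checked against this formula: it reports a log likelihood of 8569.721 and estimates 26 parameters (x1–x21, three AR terms, one MA term, and sigma2):

```python
def aic(n_params, log_likelihood):
    # Akaike information criterion: 2k - 2*ln(L-hat).
    return 2 * n_params - 2 * log_likelihood

# 21 regression coefficients + 3 AR + 1 MA + sigma2 = 26 parameters
print(aic(26, 8569.721))  # close to the reported AIC of -17087.441
```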
Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.73965, saving model to LSTM7.h5
10/10 - 2s - loss: 0.1233 - mse: 0.1233 - mae: 0.2942 - val_loss: 0.7397 - val_mse: 0.7397 - val_mae: 0.8252 - lr: 0.0010 - 2s/epoch - 218ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.73965 to 0.66869, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0684 - mse: 0.0684 - mae: 0.2238 - val_loss: 0.6687 - val_mse: 0.6687 - val_mae: 0.7838 - lr: 0.0010 - 64ms/epoch - 6ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.66869 to 0.58974, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0340 - mse: 0.0340 - mae: 0.1484 - val_loss: 0.5897 - val_mse: 0.5897 - val_mae: 0.7351 - lr: 0.0010 - 62ms/epoch - 6ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.58974 to 0.42411, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0324 - mse: 0.0324 - mae: 0.1428 - val_loss: 0.4241 - val_mse: 0.4241 - val_mae: 0.6198 - lr: 0.0010 - 60ms/epoch - 6ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.42411 to 0.33734, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0242 - mse: 0.0242 - mae: 0.1266 - val_loss: 0.3373 - val_mse: 0.3373 - val_mae: 0.5500 - lr: 0.0010 - 65ms/epoch - 7ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.33734 to 0.30873, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0236 - mse: 0.0236 - mae: 0.1251 - val_loss: 0.3087 - val_mse: 0.3087 - val_mae: 0.5256 - lr: 0.0010 - 62ms/epoch - 6ms/step
Epoch 7/500

Epoch 00007: val_loss improved from 0.30873 to 0.25836, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0202 - mse: 0.0202 - mae: 0.1135 - val_loss: 0.2584 - val_mse: 0.2584 - val_mae: 0.4789 - lr: 0.0010 - 62ms/epoch - 6ms/step
Epoch 8/500

Epoch 00008: val_loss improved from 0.25836 to 0.25317, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0191 - mse: 0.0191 - mae: 0.1097 - val_loss: 0.2532 - val_mse: 0.2532 - val_mae: 0.4741 - lr: 0.0010 - 78ms/epoch - 8ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.25317
10/10 - 0s - loss: 0.0155 - mse: 0.0155 - mae: 0.0986 - val_loss: 0.2580 - val_mse: 0.2580 - val_mae: 0.4791 - lr: 0.0010 - 62ms/epoch - 6ms/step
Epoch 10/500

Epoch 00010: val_loss improved from 0.25317 to 0.24856, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0160 - mse: 0.0160 - mae: 0.0999 - val_loss: 0.2486 - val_mse: 0.2486 - val_mae: 0.4699 - lr: 0.0010 - 73ms/epoch - 7ms/step
Epoch 11/500

Epoch 00011: val_loss improved from 0.24856 to 0.24088, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0123 - mse: 0.0123 - mae: 0.0883 - val_loss: 0.2409 - val_mse: 0.2409 - val_mae: 0.4626 - lr: 0.0010 - 72ms/epoch - 7ms/step
Epoch 12/500

Epoch 00012: val_loss improved from 0.24088 to 0.21981, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0119 - mse: 0.0119 - mae: 0.0874 - val_loss: 0.2198 - val_mse: 0.2198 - val_mae: 0.4406 - lr: 0.0010 - 74ms/epoch - 7ms/step
Epoch 13/500

Epoch 00013: val_loss improved from 0.21981 to 0.20760, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0119 - mse: 0.0119 - mae: 0.0871 - val_loss: 0.2076 - val_mse: 0.2076 - val_mae: 0.4273 - lr: 0.0010 - 64ms/epoch - 6ms/step
Epoch 14/500

Epoch 00014: val_loss improved from 0.20760 to 0.20492, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0115 - mse: 0.0115 - mae: 0.0854 - val_loss: 0.2049 - val_mse: 0.2049 - val_mae: 0.4246 - lr: 0.0010 - 65ms/epoch - 7ms/step
Epoch 15/500

Epoch 00015: val_loss improved from 0.20492 to 0.20166, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0092 - mse: 0.0092 - mae: 0.0765 - val_loss: 0.2017 - val_mse: 0.2017 - val_mae: 0.4214 - lr: 0.0010 - 64ms/epoch - 6ms/step
Epoch 16/500

Epoch 00016: val_loss improved from 0.20166 to 0.19058, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0098 - mse: 0.0098 - mae: 0.0777 - val_loss: 0.1906 - val_mse: 0.1906 - val_mae: 0.4091 - lr: 0.0010 - 67ms/epoch - 7ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.19058
10/10 - 0s - loss: 0.0082 - mse: 0.0082 - mae: 0.0727 - val_loss: 0.1970 - val_mse: 0.1970 - val_mae: 0.4164 - lr: 0.0010 - 51ms/epoch - 5ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.19058
10/10 - 0s - loss: 0.0089 - mse: 0.0089 - mae: 0.0740 - val_loss: 0.2024 - val_mse: 0.2024 - val_mae: 0.4221 - lr: 0.0010 - 52ms/epoch - 5ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.19058
10/10 - 0s - loss: 0.0083 - mse: 0.0083 - mae: 0.0707 - val_loss: 0.2001 - val_mse: 0.2001 - val_mae: 0.4194 - lr: 0.0010 - 56ms/epoch - 6ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.19058
10/10 - 0s - loss: 0.0079 - mse: 0.0079 - mae: 0.0698 - val_loss: 0.2042 - val_mse: 0.2042 - val_mae: 0.4243 - lr: 0.0010 - 50ms/epoch - 5ms/step
Epoch 21/500

Epoch 00021: val_loss improved from 0.19058 to 0.18832, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0076 - mse: 0.0076 - mae: 0.0692 - val_loss: 0.1883 - val_mse: 0.1883 - val_mae: 0.4066 - lr: 0.0010 - 63ms/epoch - 6ms/step
Epoch 22/500

Epoch 00022: val_loss improved from 0.18832 to 0.16796, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0076 - mse: 0.0076 - mae: 0.0674 - val_loss: 0.1680 - val_mse: 0.1680 - val_mae: 0.3824 - lr: 0.0010 - 64ms/epoch - 6ms/step
Epoch 23/500

Epoch 00023: val_loss improved from 0.16796 to 0.16784, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0071 - mse: 0.0071 - mae: 0.0667 - val_loss: 0.1678 - val_mse: 0.1678 - val_mae: 0.3821 - lr: 0.0010 - 74ms/epoch - 7ms/step
Epoch 24/500

Epoch 00024: val_loss improved from 0.16784 to 0.16775, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0622 - val_loss: 0.1678 - val_mse: 0.1678 - val_mae: 0.3819 - lr: 0.0010 - 74ms/epoch - 7ms/step
Epoch 25/500

Epoch 00025: val_loss improved from 0.16775 to 0.15586, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0067 - mse: 0.0067 - mae: 0.0644 - val_loss: 0.1559 - val_mse: 0.1559 - val_mae: 0.3670 - lr: 0.0010 - 71ms/epoch - 7ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.15586
10/10 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0616 - val_loss: 0.1584 - val_mse: 0.1584 - val_mae: 0.3704 - lr: 0.0010 - 55ms/epoch - 6ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.15586
10/10 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0617 - val_loss: 0.1619 - val_mse: 0.1619 - val_mae: 0.3742 - lr: 0.0010 - 52ms/epoch - 5ms/step
Epoch 28/500

Epoch 00028: val_loss improved from 0.15586 to 0.15265, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0602 - val_loss: 0.1526 - val_mse: 0.1526 - val_mae: 0.3623 - lr: 0.0010 - 68ms/epoch - 7ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.15265
10/10 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0586 - val_loss: 0.1528 - val_mse: 0.1528 - val_mae: 0.3626 - lr: 0.0010 - 53ms/epoch - 5ms/step
Epoch 30/500

Epoch 00030: val_loss improved from 0.15265 to 0.14845, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0573 - val_loss: 0.1484 - val_mse: 0.1484 - val_mae: 0.3568 - lr: 0.0010 - 68ms/epoch - 7ms/step
Epoch 31/500

Epoch 00031: val_loss improved from 0.14845 to 0.13058, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0577 - val_loss: 0.1306 - val_mse: 0.1306 - val_mae: 0.3325 - lr: 0.0010 - 69ms/epoch - 7ms/step
Epoch 32/500

Epoch 00032: val_loss improved from 0.13058 to 0.12945, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0569 - val_loss: 0.1295 - val_mse: 0.1295 - val_mae: 0.3306 - lr: 0.0010 - 67ms/epoch - 7ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.12945
10/10 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0541 - val_loss: 0.1421 - val_mse: 0.1421 - val_mae: 0.3480 - lr: 0.0010 - 53ms/epoch - 5ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.12945
10/10 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0522 - val_loss: 0.1620 - val_mse: 0.1620 - val_mae: 0.3739 - lr: 0.0010 - 51ms/epoch - 5ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.12945
10/10 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0551 - val_loss: 0.1601 - val_mse: 0.1601 - val_mae: 0.3718 - lr: 0.0010 - 57ms/epoch - 6ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.12945
10/10 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0564 - val_loss: 0.1506 - val_mse: 0.1506 - val_mae: 0.3599 - lr: 0.0010 - 55ms/epoch - 5ms/step
Epoch 37/500

Epoch 00037: val_loss improved from 0.12945 to 0.12770, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0531 - val_loss: 0.1277 - val_mse: 0.1277 - val_mae: 0.3292 - lr: 0.0010 - 65ms/epoch - 6ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.12770
10/10 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0518 - val_loss: 0.1278 - val_mse: 0.1278 - val_mae: 0.3294 - lr: 0.0010 - 63ms/epoch - 6ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.12770
10/10 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0500 - val_loss: 0.1368 - val_mse: 0.1368 - val_mae: 0.3418 - lr: 0.0010 - 62ms/epoch - 6ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.12770
10/10 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0542 - val_loss: 0.1507 - val_mse: 0.1507 - val_mae: 0.3599 - lr: 0.0010 - 58ms/epoch - 6ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.12770
10/10 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0523 - val_loss: 0.1399 - val_mse: 0.1399 - val_mae: 0.3455 - lr: 0.0010 - 60ms/epoch - 6ms/step
Epoch 42/500

Epoch 00042: val_loss improved from 0.12770 to 0.12669, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0536 - val_loss: 0.1267 - val_mse: 0.1267 - val_mae: 0.3273 - lr: 0.0010 - 74ms/epoch - 7ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.12669
10/10 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0488 - val_loss: 0.1534 - val_mse: 0.1534 - val_mae: 0.3634 - lr: 0.0010 - 61ms/epoch - 6ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.12669
10/10 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0525 - val_loss: 0.1543 - val_mse: 0.1543 - val_mae: 0.3645 - lr: 0.0010 - 59ms/epoch - 6ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.12669
10/10 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0493 - val_loss: 0.1419 - val_mse: 0.1419 - val_mae: 0.3487 - lr: 0.0010 - 53ms/epoch - 5ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.12669
10/10 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0500 - val_loss: 0.1270 - val_mse: 0.1270 - val_mae: 0.3286 - lr: 0.0010 - 58ms/epoch - 6ms/step
Epoch 47/500

Epoch 00047: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00047: val_loss did not improve from 0.12669
10/10 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0519 - val_loss: 0.1277 - val_mse: 0.1277 - val_mae: 0.3294 - lr: 0.0010 - 53ms/epoch - 5ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.12669
10/10 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0489 - val_loss: 0.1309 - val_mse: 0.1309 - val_mae: 0.3338 - lr: 1.0000e-04 - 51ms/epoch - 5ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.12669
10/10 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0495 - val_loss: 0.1321 - val_mse: 0.1321 - val_mae: 0.3355 - lr: 1.0000e-04 - 51ms/epoch - 5ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.12669
10/10 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0496 - val_loss: 0.1329 - val_mse: 0.1329 - val_mae: 0.3364 - lr: 1.0000e-04 - 55ms/epoch - 6ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.12669
10/10 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0491 - val_loss: 0.1356 - val_mse: 0.1356 - val_mae: 0.3400 - lr: 1.0000e-04 - 52ms/epoch - 5ms/step
Epoch 52/500

Epoch 00052: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00052: val_loss did not improve from 0.12669
10/10 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0484 - val_loss: 0.1380 - val_mse: 0.1380 - val_mae: 0.3433 - lr: 1.0000e-04 - 52ms/epoch - 5ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.12669
10/10 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0493 - val_loss: 0.1381 - val_mse: 0.1381 - val_mae: 0.3435 - lr: 1.0000e-05 - 55ms/epoch - 6ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.12669
10/10 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0491 - val_loss: 0.1383 - val_mse: 0.1383 - val_mae: 0.3437 - lr: 1.0000e-05 - 59ms/epoch - 6ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.12669
10/10 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0464 - val_loss: 0.1383 - val_mse: 0.1383 - val_mae: 0.3438 - lr: 1.0000e-05 - 60ms/epoch - 6ms/step
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.12669
10/10 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0463 - val_loss: 0.1384 - val_mse: 0.1384 - val_mae: 0.3438 - lr: 1.0000e-05 - 59ms/epoch - 6ms/step
Epoch 57/500

Epoch 00057: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00057: val_loss did not improve from 0.12669
10/10 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0485 - val_loss: 0.1384 - val_mse: 0.1384 - val_mae: 0.3438 - lr: 1.0000e-05 - 55ms/epoch - 6ms/step
Epoch 58/500

Epoch 00058: val_loss did not improve from 0.12669
10/10 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0493 - val_loss: 0.1384 - val_mse: 0.1384 - val_mae: 0.3438 - lr: 1.0000e-05 - 57ms/epoch - 6ms/step
Epoch 59/500

Epoch 00059: val_loss did not improve from 0.12669
10/10 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0474 - val_loss: 0.1387 - val_mse: 0.1387 - val_mae: 0.3443 - lr: 1.0000e-05 - 58ms/epoch - 6ms/step
Epoch 60/500

Epoch 00060: val_loss did not improve from 0.12669
10/10 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0479 - val_loss: 0.1390 - val_mse: 0.1390 - val_mae: 0.3447 - lr: 1.0000e-05 - 52ms/epoch - 5ms/step
Epoch 61/500

Epoch 00061: val_loss did not improve from 0.12669
10/10 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0484 - val_loss: 0.1394 - val_mse: 0.1394 - val_mae: 0.3451 - lr: 1.0000e-05 - 53ms/epoch - 5ms/step
Epoch 62/500

Epoch 00062: val_loss did not improve from 0.12669
10/10 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0491 - val_loss: 0.1394 - val_mse: 0.1394 - val_mae: 0.3452 - lr: 1.0000e-05 - 53ms/epoch - 5ms/step
Epoch 63/500

Epoch 00063: val_loss did not improve from 0.12669
10/10 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0486 - val_loss: 0.1392 - val_mse: 0.1392 - val_mae: 0.3450 - lr: 1.0000e-05 - 56ms/epoch - 6ms/step
Epoch 64/500

Epoch 00064: val_loss did not improve from 0.12669
10/10 - 0s - loss: 0.0035 - mse: 0.0035 - mae: 0.0465 - val_loss: 0.1390 - val_mse: 0.1390 - val_mae: 0.3446 - lr: 1.0000e-05 - 57ms/epoch - 6ms/step
Epoch 65/500

Epoch 00065: val_loss did not improve from 0.12669
10/10 - 0s - loss: 0.0035 - mse: 0.0035 - mae: 0.0467 - val_loss: 0.1390 - val_mse: 0.1390 - val_mae: 0.3446 - lr: 1.0000e-05 - 53ms/epoch - 5ms/step
Epoch 66/500

Epoch 00066: val_loss did not improve from 0.12669
10/10 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0472 - val_loss: 0.1392 - val_mse: 0.1392 - val_mae: 0.3449 - lr: 1.0000e-05 - 52ms/epoch - 5ms/step
Epoch 67/500

Epoch 00067: val_loss did not improve from 0.12669
10/10 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0493 - val_loss: 0.1392 - val_mse: 0.1392 - val_mae: 0.3449 - lr: 1.0000e-05 - 55ms/epoch - 6ms/step
Epoch 68/500

Epoch 00068: val_loss did not improve from 0.12669
10/10 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0479 - val_loss: 0.1391 - val_mse: 0.1391 - val_mae: 0.3448 - lr: 1.0000e-05 - 53ms/epoch - 5ms/step
Epoch 69/500

Epoch 00069: val_loss did not improve from 0.12669
10/10 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0492 - val_loss: 0.1392 - val_mse: 0.1392 - val_mae: 0.3449 - lr: 1.0000e-05 - 53ms/epoch - 5ms/step
Epoch 70/500

Epoch 00070: val_loss did not improve from 0.12669
10/10 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0479 - val_loss: 0.1394 - val_mse: 0.1394 - val_mae: 0.3451 - lr: 1.0000e-05 - 55ms/epoch - 5ms/step
Epoch 71/500

Epoch 00071: val_loss did not improve from 0.12669
10/10 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0479 - val_loss: 0.1394 - val_mse: 0.1394 - val_mae: 0.3452 - lr: 1.0000e-05 - 67ms/epoch - 7ms/step
Epoch 72/500

Epoch 00072: val_loss did not improve from 0.12669
10/10 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0469 - val_loss: 0.1395 - val_mse: 0.1395 - val_mae: 0.3453 - lr: 1.0000e-05 - 58ms/epoch - 6ms/step
Epoch 73/500

Epoch 00073: val_loss did not improve from 0.12669
10/10 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0470 - val_loss: 0.1395 - val_mse: 0.1395 - val_mae: 0.3453 - lr: 1.0000e-05 - 57ms/epoch - 6ms/step
Epoch 74/500

Epoch 00074: val_loss did not improve from 0.12669
10/10 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0470 - val_loss: 0.1393 - val_mse: 0.1393 - val_mae: 0.3451 - lr: 1.0000e-05 - 53ms/epoch - 5ms/step
Epoch 75/500

Epoch 00075: val_loss did not improve from 0.12669
10/10 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0480 - val_loss: 0.1395 - val_mse: 0.1395 - val_mae: 0.3453 - lr: 1.0000e-05 - 56ms/epoch - 6ms/step
Epoch 76/500

Epoch 00076: val_loss did not improve from 0.12669
10/10 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0475 - val_loss: 0.1395 - val_mse: 0.1395 - val_mae: 0.3453 - lr: 1.0000e-05 - 62ms/epoch - 6ms/step
Epoch 77/500

Epoch 00077: val_loss did not improve from 0.12669
10/10 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0474 - val_loss: 0.1398 - val_mse: 0.1398 - val_mae: 0.3457 - lr: 1.0000e-05 - 51ms/epoch - 5ms/step
Epoch 78/500

Epoch 00078: val_loss did not improve from 0.12669
10/10 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0484 - val_loss: 0.1400 - val_mse: 0.1400 - val_mae: 0.3460 - lr: 1.0000e-05 - 53ms/epoch - 5ms/step
Epoch 79/500

Epoch 00079: val_loss did not improve from 0.12669
10/10 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0474 - val_loss: 0.1400 - val_mse: 0.1400 - val_mae: 0.3460 - lr: 1.0000e-05 - 53ms/epoch - 5ms/step
Epoch 80/500

Epoch 00080: val_loss did not improve from 0.12669
10/10 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0468 - val_loss: 0.1399 - val_mse: 0.1399 - val_mae: 0.3459 - lr: 1.0000e-05 - 57ms/epoch - 6ms/step
Epoch 81/500

Epoch 00081: val_loss did not improve from 0.12669
10/10 - 0s - loss: 0.0035 - mse: 0.0035 - mae: 0.0479 - val_loss: 0.1400 - val_mse: 0.1400 - val_mae: 0.3460 - lr: 1.0000e-05 - 51ms/epoch - 5ms/step
Epoch 82/500

Epoch 00082: val_loss did not improve from 0.12669
10/10 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0489 - val_loss: 0.1399 - val_mse: 0.1399 - val_mae: 0.3459 - lr: 1.0000e-05 - 52ms/epoch - 5ms/step
Epoch 83/500

Epoch 00083: val_loss did not improve from 0.12669
10/10 - 0s - loss: 0.0035 - mse: 0.0035 - mae: 0.0472 - val_loss: 0.1396 - val_mse: 0.1396 - val_mae: 0.3454 - lr: 1.0000e-05 - 55ms/epoch - 5ms/step
Epoch 84/500

Epoch 00084: val_loss did not improve from 0.12669
10/10 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0496 - val_loss: 0.1394 - val_mse: 0.1394 - val_mae: 0.3451 - lr: 1.0000e-05 - 52ms/epoch - 5ms/step
Epoch 85/500

Epoch 00085: val_loss did not improve from 0.12669
10/10 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0475 - val_loss: 0.1394 - val_mse: 0.1394 - val_mae: 0.3452 - lr: 1.0000e-05 - 53ms/epoch - 5ms/step
Epoch 86/500

Epoch 00086: val_loss did not improve from 0.12669
10/10 - 0s - loss: 0.0035 - mse: 0.0035 - mae: 0.0470 - val_loss: 0.1397 - val_mse: 0.1397 - val_mae: 0.3455 - lr: 1.0000e-05 - 56ms/epoch - 6ms/step
Epoch 87/500

Epoch 00087: val_loss did not improve from 0.12669
10/10 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0471 - val_loss: 0.1397 - val_mse: 0.1397 - val_mae: 0.3456 - lr: 1.0000e-05 - 55ms/epoch - 6ms/step
Epoch 88/500

Epoch 00088: val_loss did not improve from 0.12669
10/10 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0484 - val_loss: 0.1396 - val_mse: 0.1396 - val_mae: 0.3454 - lr: 1.0000e-05 - 66ms/epoch - 7ms/step
Epoch 89/500

Epoch 00089: val_loss did not improve from 0.12669
10/10 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0482 - val_loss: 0.1394 - val_mse: 0.1394 - val_mae: 0.3452 - lr: 1.0000e-05 - 58ms/epoch - 6ms/step
Epoch 90/500

Epoch 00090: val_loss did not improve from 0.12669
10/10 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0506 - val_loss: 0.1396 - val_mse: 0.1396 - val_mae: 0.3454 - lr: 1.0000e-05 - 56ms/epoch - 6ms/step
Epoch 91/500

Epoch 00091: val_loss did not improve from 0.12669
10/10 - 0s - loss: 0.0035 - mse: 0.0035 - mae: 0.0466 - val_loss: 0.1399 - val_mse: 0.1399 - val_mae: 0.3458 - lr: 1.0000e-05 - 55ms/epoch - 5ms/step
Epoch 92/500

Epoch 00092: val_loss did not improve from 0.12669
10/10 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0471 - val_loss: 0.1402 - val_mse: 0.1402 - val_mae: 0.3462 - lr: 1.0000e-05 - 58ms/epoch - 6ms/step
Epoch 00092: early stopping
SMA
Prediction vs Close:		51.87% Accuracy
Prediction vs Prediction:	48.88% Accuracy
MSE:	 37.148636819682245 
RMSE:	 6.094968155756209 
MAPE:	 5.090179331518223

EMA
Prediction vs Close:		55.22% Accuracy
Prediction vs Prediction:	47.39% Accuracy
MSE:	 31.582654654902484 
RMSE:	 5.619844718041815 
MAPE:	 4.507182634072088

WMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	48.88% Accuracy
MSE:	 65.00101872981564 
RMSE:	 8.062320926992156 
MAPE:	 6.705711592581163

DEMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	45.52% Accuracy
MSE:	 35.269002386685244 
RMSE:	 5.938771117553298 
MAPE:	 4.62878838931535

KAMA
KAMA([input_arrays], [timeperiod=30])

Kaufman Adaptive Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
18
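As with DEMA, the help text above gives only the signature. Kaufman's adaptive moving average scales its smoothing constant by an efficiency ratio, so it tracks clean trends quickly but flattens out in noisy, range-bound stretches. A pure-Python sketch of the standard Kaufman formulation (efficiency ratio over `period` samples, fast/slow constants of 2 and 30; TA-Lib's seeding and lookback handling may differ):

```python
def kama(prices, period=30, fast=2, slow=30):
    fast_sc = 2.0 / (fast + 1)
    slow_sc = 2.0 / (slow + 1)
    out = [prices[period - 1]]  # seed at the first full window
    for t in range(period, len(prices)):
        # Efficiency ratio: net change over the window divided by the sum
        # of absolute step changes (1 = straight trend, ~0 = pure noise).
        change = abs(prices[t] - prices[t - period])
        volatility = sum(abs(prices[i] - prices[i - 1])
                         for i in range(t - period + 1, t + 1))
        er = change / volatility if volatility else 0.0
        # Smoothing constant interpolates between the fast and slow EMAs,
        # squared so that noisy periods are smoothed even more heavily.
        sc = (er * (fast_sc - slow_sc) + slow_sc) ** 2
        out.append(out[-1] + sc * (prices[t] - out[-1]))
    return out
```

The sketch returns one value per bar from the first full window onward; TA-Lib instead pads the lookback period with NaNs.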

Working on KAMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-17005.902, Time=2.34 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14574.593, Time=3.99 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16796.316, Time=8.11 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14572.593, Time=4.91 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-17004.193, Time=2.42 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-15176.063, Time=10.20 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-16873.638, Time=10.61 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-17008.756, Time=2.40 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=-17004.764, Time=2.82 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=-15723.849, Time=13.26 sec
 ARIMA(2,3,0)(0,0,0)[0] intercept   : AIC=-17006.756, Time=2.38 sec

Best model:  ARIMA(2,3,0)(0,0,0)[0]          
Total fit time: 63.483 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(2, 3, 0)   Log Likelihood                8528.378
Date:                Sun, 12 Dec 2021   AIC                         -17008.756
Time:                        15:24:53   BIC                         -16896.176
Sample:                             0   HQIC                        -16965.520
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1          -2.24e-15   7.41e-26  -3.02e+10      0.000   -2.24e-15   -2.24e-15
x2          8.461e-16    6.6e-26   1.28e+10      0.000    8.46e-16    8.46e-16
x3          4.901e-16   6.89e-26   7.11e+09      0.000     4.9e-16     4.9e-16
x4             1.0000   6.96e-26   1.44e+25      0.000       1.000       1.000
x5          5.931e-15   6.61e-26   8.97e+10      0.000    5.93e-15    5.93e-15
x6          -1.05e-15    1.5e-25     -7e+09      0.000   -1.05e-15   -1.05e-15
x7          1.439e-15   6.87e-26    2.1e+10      0.000    1.44e-15    1.44e-15
x8          -1.25e-15    6.8e-26  -1.84e+10      0.000   -1.25e-15   -1.25e-15
x9         -9.356e-17   8.97e-27  -1.04e+10      0.000   -9.36e-17   -9.36e-17
x10        -1.145e-16   2.88e-26  -3.98e+09      0.000   -1.15e-16   -1.15e-16
x11        -2.036e-16    6.8e-26     -3e+09      0.000   -2.04e-16   -2.04e-16
x12         5.951e-16   6.76e-26   8.81e+09      0.000    5.95e-16    5.95e-16
x13        -6.117e-17   6.94e-26  -8.81e+08      0.000   -6.12e-17   -6.12e-17
x14         1.167e-15   1.99e-25   5.85e+09      0.000    1.17e-15    1.17e-15
x15        -4.274e-14   6.99e-26  -6.11e+11      0.000   -4.27e-14   -4.27e-14
x16         2.262e-14   8.56e-26   2.64e+11      0.000    2.26e-14    2.26e-14
x17         3.384e-14   6.46e-26   5.24e+11      0.000    3.38e-14    3.38e-14
x18         9.894e-16    5.8e-26   1.71e+10      0.000    9.89e-16    9.89e-16
x19         4.115e-14   7.75e-26   5.31e+11      0.000    4.12e-14    4.12e-14
x20        -2.176e-15   9.49e-26  -2.29e+10      0.000   -2.18e-15   -2.18e-15
x21        -7.755e-17   4.63e-26  -1.67e+09      0.000   -7.75e-17   -7.75e-17
ar.L1         -0.9988   9.76e-22  -1.02e+21      0.000      -0.999      -0.999
ar.L2         -0.4972   4.07e-23  -1.22e+22      0.000      -0.497      -0.497
sigma2          1e-10   6.99e-11      1.432      0.152   -3.69e-11    2.37e-10
===================================================================================
Ljung-Box (L1) (Q):                  31.54   Jarque-Bera (JB):           2432532.03
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                            -0.15
Prob(H) (two-sided):                  0.00   Kurtosis:                       272.30
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 7.19e+48. Standard errors may be unstable.
ARIMA order: (2, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.01543, saving model to LSTM7.h5
45/45 - 3s - loss: 0.0923 - mse: 0.0923 - mae: 0.2595 - val_loss: 0.0154 - val_mse: 0.0154 - val_mae: 0.1058 - lr: 0.0010 - 3s/epoch - 57ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.01543
45/45 - 0s - loss: 0.0162 - mse: 0.0162 - mae: 0.1027 - val_loss: 0.0382 - val_mse: 0.0382 - val_mae: 0.1807 - lr: 0.0010 - 198ms/epoch - 4ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.01543
45/45 - 0s - loss: 0.0192 - mse: 0.0192 - mae: 0.1081 - val_loss: 0.0306 - val_mse: 0.0306 - val_mae: 0.1598 - lr: 0.0010 - 172ms/epoch - 4ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.01543
45/45 - 0s - loss: 0.0143 - mse: 0.0143 - mae: 0.0924 - val_loss: 0.0376 - val_mse: 0.0376 - val_mae: 0.1802 - lr: 0.0010 - 173ms/epoch - 4ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.01543
45/45 - 0s - loss: 0.0151 - mse: 0.0151 - mae: 0.0952 - val_loss: 0.0269 - val_mse: 0.0269 - val_mae: 0.1482 - lr: 0.0010 - 179ms/epoch - 4ms/step
Epoch 6/500

Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00006: val_loss did not improve from 0.01543
45/45 - 0s - loss: 0.0145 - mse: 0.0145 - mae: 0.0929 - val_loss: 0.0243 - val_mse: 0.0243 - val_mae: 0.1400 - lr: 0.0010 - 175ms/epoch - 4ms/step
Epoch 7/500

Epoch 00007: val_loss improved from 0.01543 to 0.00857, saving model to LSTM7.h5
45/45 - 0s - loss: 0.0341 - mse: 0.0341 - mae: 0.1510 - val_loss: 0.0086 - val_mse: 0.0086 - val_mae: 0.0754 - lr: 1.0000e-04 - 198ms/epoch - 4ms/step
Epoch 8/500

Epoch 00008: val_loss improved from 0.00857 to 0.00771, saving model to LSTM7.h5
45/45 - 0s - loss: 0.0104 - mse: 0.0104 - mae: 0.0835 - val_loss: 0.0077 - val_mse: 0.0077 - val_mae: 0.0711 - lr: 1.0000e-04 - 194ms/epoch - 4ms/step
Epoch 9/500

Epoch 00009: val_loss improved from 0.00771 to 0.00749, saving model to LSTM7.h5
45/45 - 0s - loss: 0.0093 - mse: 0.0093 - mae: 0.0781 - val_loss: 0.0075 - val_mse: 0.0075 - val_mae: 0.0701 - lr: 1.0000e-04 - 180ms/epoch - 4ms/step
Epoch 10/500

Epoch 00010: val_loss improved from 0.00749 to 0.00737, saving model to LSTM7.h5
45/45 - 0s - loss: 0.0087 - mse: 0.0087 - mae: 0.0755 - val_loss: 0.0074 - val_mse: 0.0074 - val_mae: 0.0695 - lr: 1.0000e-04 - 209ms/epoch - 5ms/step
Epoch 11/500

Epoch 00011: val_loss improved from 0.00737 to 0.00729, saving model to LSTM7.h5
45/45 - 0s - loss: 0.0087 - mse: 0.0087 - mae: 0.0747 - val_loss: 0.0073 - val_mse: 0.0073 - val_mae: 0.0691 - lr: 1.0000e-04 - 179ms/epoch - 4ms/step
Epoch 12/500

Epoch 00012: val_loss improved from 0.00729 to 0.00724, saving model to LSTM7.h5
45/45 - 0s - loss: 0.0086 - mse: 0.0086 - mae: 0.0747 - val_loss: 0.0072 - val_mse: 0.0072 - val_mae: 0.0688 - lr: 1.0000e-04 - 211ms/epoch - 5ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.00724
45/45 - 0s - loss: 0.0079 - mse: 0.0079 - mae: 0.0719 - val_loss: 0.0075 - val_mse: 0.0075 - val_mae: 0.0700 - lr: 1.0000e-04 - 183ms/epoch - 4ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.00724
45/45 - 0s - loss: 0.0080 - mse: 0.0080 - mae: 0.0700 - val_loss: 0.0077 - val_mse: 0.0077 - val_mae: 0.0716 - lr: 1.0000e-04 - 177ms/epoch - 4ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.00724
45/45 - 0s - loss: 0.0085 - mse: 0.0085 - mae: 0.0741 - val_loss: 0.0081 - val_mse: 0.0081 - val_mae: 0.0735 - lr: 1.0000e-04 - 171ms/epoch - 4ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.00724
45/45 - 0s - loss: 0.0075 - mse: 0.0075 - mae: 0.0706 - val_loss: 0.0085 - val_mse: 0.0085 - val_mae: 0.0759 - lr: 1.0000e-04 - 176ms/epoch - 4ms/step
Epoch 17/500

Epoch 00017: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00017: val_loss did not improve from 0.00724
45/45 - 0s - loss: 0.0076 - mse: 0.0076 - mae: 0.0706 - val_loss: 0.0081 - val_mse: 0.0081 - val_mae: 0.0738 - lr: 1.0000e-04 - 183ms/epoch - 4ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.00724
45/45 - 0s - loss: 0.0067 - mse: 0.0067 - mae: 0.0635 - val_loss: 0.0080 - val_mse: 0.0080 - val_mae: 0.0733 - lr: 1.0000e-05 - 187ms/epoch - 4ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.00724
45/45 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0659 - val_loss: 0.0080 - val_mse: 0.0080 - val_mae: 0.0732 - lr: 1.0000e-05 - 168ms/epoch - 4ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.00724
45/45 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0646 - val_loss: 0.0080 - val_mse: 0.0080 - val_mae: 0.0733 - lr: 1.0000e-05 - 170ms/epoch - 4ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.00724
45/45 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0638 - val_loss: 0.0080 - val_mse: 0.0080 - val_mae: 0.0732 - lr: 1.0000e-05 - 182ms/epoch - 4ms/step
Epoch 22/500

Epoch 00022: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00022: val_loss did not improve from 0.00724
45/45 - 0s - loss: 0.0071 - mse: 0.0071 - mae: 0.0671 - val_loss: 0.0079 - val_mse: 0.0079 - val_mae: 0.0728 - lr: 1.0000e-05 - 169ms/epoch - 4ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.00724
45/45 - 0s - loss: 0.0067 - mse: 0.0067 - mae: 0.0653 - val_loss: 0.0079 - val_mse: 0.0079 - val_mae: 0.0727 - lr: 1.0000e-05 - 198ms/epoch - 4ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.00724
45/45 - 0s - loss: 0.0069 - mse: 0.0069 - mae: 0.0661 - val_loss: 0.0078 - val_mse: 0.0078 - val_mae: 0.0724 - lr: 1.0000e-05 - 179ms/epoch - 4ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.00724
45/45 - 0s - loss: 0.0071 - mse: 0.0071 - mae: 0.0658 - val_loss: 0.0077 - val_mse: 0.0077 - val_mae: 0.0721 - lr: 1.0000e-05 - 177ms/epoch - 4ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.00724
45/45 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0642 - val_loss: 0.0077 - val_mse: 0.0077 - val_mae: 0.0719 - lr: 1.0000e-05 - 177ms/epoch - 4ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.00724
45/45 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0641 - val_loss: 0.0077 - val_mse: 0.0077 - val_mae: 0.0716 - lr: 1.0000e-05 - 167ms/epoch - 4ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.00724
45/45 - 0s - loss: 0.0069 - mse: 0.0069 - mae: 0.0650 - val_loss: 0.0076 - val_mse: 0.0076 - val_mae: 0.0710 - lr: 1.0000e-05 - 182ms/epoch - 4ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.00724
45/45 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0639 - val_loss: 0.0076 - val_mse: 0.0076 - val_mae: 0.0710 - lr: 1.0000e-05 - 181ms/epoch - 4ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.00724
45/45 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0630 - val_loss: 0.0076 - val_mse: 0.0076 - val_mae: 0.0714 - lr: 1.0000e-05 - 169ms/epoch - 4ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.00724
45/45 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0588 - val_loss: 0.0076 - val_mse: 0.0076 - val_mae: 0.0714 - lr: 1.0000e-05 - 167ms/epoch - 4ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.00724
45/45 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0623 - val_loss: 0.0076 - val_mse: 0.0076 - val_mae: 0.0714 - lr: 1.0000e-05 - 179ms/epoch - 4ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.00724
45/45 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0641 - val_loss: 0.0076 - val_mse: 0.0076 - val_mae: 0.0715 - lr: 1.0000e-05 - 167ms/epoch - 4ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.00724
45/45 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0623 - val_loss: 0.0077 - val_mse: 0.0077 - val_mae: 0.0717 - lr: 1.0000e-05 - 194ms/epoch - 4ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.00724
45/45 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0618 - val_loss: 0.0076 - val_mse: 0.0076 - val_mae: 0.0712 - lr: 1.0000e-05 - 185ms/epoch - 4ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.00724
45/45 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0637 - val_loss: 0.0076 - val_mse: 0.0076 - val_mae: 0.0712 - lr: 1.0000e-05 - 173ms/epoch - 4ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.00724
45/45 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0610 - val_loss: 0.0075 - val_mse: 0.0075 - val_mae: 0.0710 - lr: 1.0000e-05 - 176ms/epoch - 4ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.00724
45/45 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0653 - val_loss: 0.0076 - val_mse: 0.0076 - val_mae: 0.0714 - lr: 1.0000e-05 - 174ms/epoch - 4ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.00724
45/45 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0635 - val_loss: 0.0076 - val_mse: 0.0076 - val_mae: 0.0715 - lr: 1.0000e-05 - 179ms/epoch - 4ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.00724
45/45 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0611 - val_loss: 0.0076 - val_mse: 0.0076 - val_mae: 0.0713 - lr: 1.0000e-05 - 226ms/epoch - 5ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.00724
45/45 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0642 - val_loss: 0.0076 - val_mse: 0.0076 - val_mae: 0.0714 - lr: 1.0000e-05 - 168ms/epoch - 4ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.00724
45/45 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0643 - val_loss: 0.0076 - val_mse: 0.0076 - val_mae: 0.0713 - lr: 1.0000e-05 - 169ms/epoch - 4ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.00724
45/45 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0604 - val_loss: 0.0076 - val_mse: 0.0076 - val_mae: 0.0711 - lr: 1.0000e-05 - 167ms/epoch - 4ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.00724
45/45 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0605 - val_loss: 0.0076 - val_mse: 0.0076 - val_mae: 0.0712 - lr: 1.0000e-05 - 184ms/epoch - 4ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.00724
45/45 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0650 - val_loss: 0.0075 - val_mse: 0.0075 - val_mae: 0.0708 - lr: 1.0000e-05 - 193ms/epoch - 4ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.00724
45/45 - 0s - loss: 0.0067 - mse: 0.0067 - mae: 0.0652 - val_loss: 0.0075 - val_mse: 0.0075 - val_mae: 0.0710 - lr: 1.0000e-05 - 181ms/epoch - 4ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.00724
45/45 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0620 - val_loss: 0.0076 - val_mse: 0.0076 - val_mae: 0.0711 - lr: 1.0000e-05 - 173ms/epoch - 4ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.00724
45/45 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0603 - val_loss: 0.0075 - val_mse: 0.0075 - val_mae: 0.0711 - lr: 1.0000e-05 - 166ms/epoch - 4ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.00724
45/45 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0632 - val_loss: 0.0075 - val_mse: 0.0075 - val_mae: 0.0709 - lr: 1.0000e-05 - 184ms/epoch - 4ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.00724
45/45 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0631 - val_loss: 0.0075 - val_mse: 0.0075 - val_mae: 0.0709 - lr: 1.0000e-05 - 190ms/epoch - 4ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.00724
45/45 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0656 - val_loss: 0.0076 - val_mse: 0.0076 - val_mae: 0.0712 - lr: 1.0000e-05 - 175ms/epoch - 4ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.00724
45/45 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0630 - val_loss: 0.0076 - val_mse: 0.0076 - val_mae: 0.0713 - lr: 1.0000e-05 - 170ms/epoch - 4ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.00724
45/45 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0609 - val_loss: 0.0077 - val_mse: 0.0077 - val_mae: 0.0717 - lr: 1.0000e-05 - 165ms/epoch - 4ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.00724
45/45 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0615 - val_loss: 0.0077 - val_mse: 0.0077 - val_mae: 0.0719 - lr: 1.0000e-05 - 181ms/epoch - 4ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.00724
45/45 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0610 - val_loss: 0.0077 - val_mse: 0.0077 - val_mae: 0.0719 - lr: 1.0000e-05 - 170ms/epoch - 4ms/step
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.00724
45/45 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0595 - val_loss: 0.0077 - val_mse: 0.0077 - val_mae: 0.0718 - lr: 1.0000e-05 - 187ms/epoch - 4ms/step
Epoch 57/500

Epoch 00057: val_loss did not improve from 0.00724
45/45 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0637 - val_loss: 0.0075 - val_mse: 0.0075 - val_mae: 0.0711 - lr: 1.0000e-05 - 179ms/epoch - 4ms/step
Epoch 58/500

Epoch 00058: val_loss did not improve from 0.00724
45/45 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0600 - val_loss: 0.0077 - val_mse: 0.0077 - val_mae: 0.0720 - lr: 1.0000e-05 - 165ms/epoch - 4ms/step
Epoch 59/500

Epoch 00059: val_loss did not improve from 0.00724
45/45 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0618 - val_loss: 0.0077 - val_mse: 0.0077 - val_mae: 0.0723 - lr: 1.0000e-05 - 174ms/epoch - 4ms/step
Epoch 60/500

Epoch 00060: val_loss did not improve from 0.00724
45/45 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0622 - val_loss: 0.0077 - val_mse: 0.0077 - val_mae: 0.0721 - lr: 1.0000e-05 - 174ms/epoch - 4ms/step
Epoch 61/500

Epoch 00061: val_loss did not improve from 0.00724
45/45 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0580 - val_loss: 0.0079 - val_mse: 0.0079 - val_mae: 0.0732 - lr: 1.0000e-05 - 194ms/epoch - 4ms/step
Epoch 62/500

Epoch 00062: val_loss did not improve from 0.00724
45/45 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0606 - val_loss: 0.0077 - val_mse: 0.0077 - val_mae: 0.0721 - lr: 1.0000e-05 - 174ms/epoch - 4ms/step
Epoch 00062: early stopping
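The "saving model to LSTM7.h5", "ReduceLROnPlateau reducing learning rate", and "early stopping" messages in the trace above come from standard Keras callbacks. The configuration below is a sketch that would reproduce this kind of log; the exact patience and factor values are assumptions, not taken from the notebook's hidden code cell.

```python
from tensorflow.keras.callbacks import (
    ModelCheckpoint, ReduceLROnPlateau, EarlyStopping)

# Callback setup consistent with the log above (patience/factor values
# are assumptions inferred from the trace, not the notebook's own code).
callbacks = [
    # "saving model to LSTM7.h5" -- keep only the best val_loss weights
    ModelCheckpoint('LSTM7.h5', monitor='val_loss',
                    save_best_only=True, verbose=1),
    # "reducing learning rate to ..." -- steps 1e-3 -> 1e-4 -> 1e-5
    ReduceLROnPlateau(monitor='val_loss', factor=0.1,
                      patience=5, min_lr=1e-5, verbose=1),
    # "early stopping" after a long plateau in val_loss
    EarlyStopping(monitor='val_loss', patience=50, verbose=1),
]
# model.fit(X_train, y_train, epochs=500, validation_data=(X_val, y_val),
#           callbacks=callbacks, verbose=2)
```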
SMA
Prediction vs Close:		51.87% Accuracy
Prediction vs Prediction:	48.88% Accuracy
MSE:	 37.148636819682245 
RMSE:	 6.094968155756209 
MAPE:	 5.090179331518223

EMA
Prediction vs Close:		55.22% Accuracy
Prediction vs Prediction:	47.39% Accuracy
MSE:	 31.582654654902484 
RMSE:	 5.619844718041815 
MAPE:	 4.507182634072088

WMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	48.88% Accuracy
MSE:	 65.00101872981564 
RMSE:	 8.062320926992156 
MAPE:	 6.705711592581163

DEMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	45.52% Accuracy
MSE:	 35.269002386685244 
RMSE:	 5.938771117553298 
MAPE:	 4.62878838931535

KAMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	48.88% Accuracy
MSE:	 62.25156682504816 
RMSE:	 7.8899662119078915 
MAPE:	 6.222956717810362

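The per-indicator scores above (MSE, RMSE, MAPE, and the directional-accuracy percentage) can be reproduced with a small helper. The function below is a sketch with illustrative names; the "Prediction vs Close" accuracy is assumed to mean that the predicted move from the previous close shares the sign of the actual move, since the output does not define it.

```python
import numpy as np

def score(pred, close):
    """Sketch of the evaluation printed above (names are illustrative)."""
    pred, close = np.asarray(pred, float), np.asarray(close, float)
    err = pred - close
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs(err / close)) * 100.0          # in percent
    # Assumed "Prediction vs Close" definition: does the predicted move
    # from yesterday's close share the sign of the actual move?
    direction = np.sign(pred[1:] - close[:-1]) == np.sign(np.diff(close))
    return mse, rmse, mape, 100.0 * direction.mean()
```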
MIDPOINT
MIDPOINT([input_arrays], [timeperiod=14])

MidPoint over period (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 14
Outputs:
    real
14
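The MIDPOINT indicator documented above is simply the rolling (highest + lowest) / 2 of the price over `timeperiod` samples. A plain-NumPy equivalent of TA-Lib's MIDPOINT, with the warm-up samples left as NaN:

```python
import numpy as np

def midpoint(price, timeperiod=14):
    # Rolling (max + min) / 2 -- the quantity TA-Lib's MIDPOINT returns.
    price = np.asarray(price, float)
    out = np.full(len(price), np.nan)
    for i in range(timeperiod - 1, len(price)):
        window = price[i - timeperiod + 1 : i + 1]
        out[i] = (window.max() + window.min()) / 2.0
    return out
```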

Working on MIDPOINT predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-17005.753, Time=2.21 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14574.592, Time=3.96 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16288.639, Time=11.12 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14572.592, Time=5.33 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-15275.254, Time=7.23 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-15486.751, Time=12.59 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=48.000, Time=0.49 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-17008.491, Time=2.60 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=-17004.554, Time=3.16 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=-17087.445, Time=6.13 sec
 ARIMA(3,3,2)(0,0,0)[0]             : AIC=-15686.421, Time=9.69 sec
 ARIMA(2,3,2)(0,0,0)[0]             : AIC=-17030.168, Time=15.39 sec
 ARIMA(3,3,1)(0,0,0)[0] intercept   : AIC=-15138.715, Time=14.21 sec

Best model:  ARIMA(3,3,1)(0,0,0)[0]          
Total fit time: 94.129 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 1)   Log Likelihood                8569.722
Date:                Sun, 12 Dec 2021   AIC                         -17087.445
Time:                        15:27:32   BIC                         -16965.483
Sample:                             0   HQIC                        -17040.607
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1          -2.14e-10   1.09e-20  -1.96e+10      0.000   -2.14e-10   -2.14e-10
x2         -2.126e-10   1.13e-20  -1.88e+10      0.000   -2.13e-10   -2.13e-10
x3         -2.175e-10   1.06e-20  -2.04e+10      0.000   -2.17e-10   -2.17e-10
x4             1.0000    1.1e-20   9.11e+19      0.000       1.000       1.000
x5         -1.941e-10   1.05e-20  -1.86e+10      0.000   -1.94e-10   -1.94e-10
x6         -4.131e-09   7.64e-20   -5.4e+10      0.000   -4.13e-09   -4.13e-09
x7         -1.965e-10   1.05e-20  -1.86e+10      0.000   -1.96e-10   -1.96e-10
x8         -1.961e-10   1.07e-20  -1.84e+10      0.000   -1.96e-10   -1.96e-10
x9         -1.005e-10   9.12e-22   -1.1e+11      0.000      -1e-10      -1e-10
x10        -1.739e-10   3.37e-21  -5.16e+10      0.000   -1.74e-10   -1.74e-10
x11        -1.941e-10   1.07e-20  -1.82e+10      0.000   -1.94e-10   -1.94e-10
x12        -2.005e-10   1.06e-20  -1.89e+10      0.000      -2e-10      -2e-10
x13        -2.056e-10   1.07e-20  -1.91e+10      0.000   -2.06e-10   -2.06e-10
x14        -1.687e-09   3.15e-20  -5.36e+10      0.000   -1.69e-09   -1.69e-09
x15        -2.365e-10   1.17e-20  -2.01e+10      0.000   -2.36e-10   -2.36e-10
x16        -1.523e-10   9.42e-21  -1.62e+10      0.000   -1.52e-10   -1.52e-10
x17        -1.491e-10   9.33e-21   -1.6e+10      0.000   -1.49e-10   -1.49e-10
x18        -6.404e-10   1.93e-20  -3.32e+10      0.000    -6.4e-10    -6.4e-10
x19        -2.596e-10   1.23e-20  -2.11e+10      0.000    -2.6e-10    -2.6e-10
x20        -6.246e-10   1.91e-20  -3.28e+10      0.000   -6.25e-10   -6.25e-10
x21        -1.953e-09   2.16e-20  -9.04e+10      0.000   -1.95e-09   -1.95e-09
ar.L1         -0.4914   1.46e-22  -3.38e+21      0.000      -0.491      -0.491
ar.L2         -0.1934   8.48e-23  -2.28e+21      0.000      -0.193      -0.193
ar.L3         -0.0491    4.2e-23  -1.17e+21      0.000      -0.049      -0.049
ma.L1         -0.7092   3.33e-22  -2.13e+21      0.000      -0.709      -0.709
sigma2       8.99e-11   6.95e-11      1.293      0.196   -4.64e-11    2.26e-10
===================================================================================
Ljung-Box (L1) (Q):                  32.51   Jarque-Bera (JB):             49038.38
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.35   Skew:                             1.06
Prob(H) (two-sided):                  0.00   Kurtosis:                        41.18
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.71e+40. Standard errors may be unstable.
ARIMA order: (3, 3, 1) 
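The order found here feeds the hybrid step: ARIMA captures the linear structure, the LSTM is trained on what ARIMA leaves behind, and the two forecasts are summed. A minimal sketch of that combination, with all names illustrative:

```python
import numpy as np

def hybrid_forecast(arima_pred, lstm_residual_pred):
    # Final hybrid prediction = linear ARIMA forecast plus the LSTM's
    # forecast of the ARIMA residual (the nonlinear remainder).
    return np.asarray(arima_pred, float) + np.asarray(lstm_residual_pred, float)
```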

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.01564, saving model to LSTM7.h5
58/58 - 2s - loss: 0.1640 - mse: 0.1640 - mae: 0.2979 - val_loss: 0.0156 - val_mse: 0.0156 - val_mae: 0.1028 - lr: 0.0010 - 2s/epoch - 40ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.01564 to 0.01327, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0291 - mse: 0.0291 - mae: 0.1345 - val_loss: 0.0133 - val_mse: 0.0133 - val_mae: 0.0936 - lr: 0.0010 - 236ms/epoch - 4ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.01327
58/58 - 0s - loss: 0.0198 - mse: 0.0198 - mae: 0.1098 - val_loss: 0.0155 - val_mse: 0.0155 - val_mae: 0.1041 - lr: 0.0010 - 208ms/epoch - 4ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.01327 to 0.01189, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0148 - mse: 0.0148 - mae: 0.0957 - val_loss: 0.0119 - val_mse: 0.0119 - val_mae: 0.0905 - lr: 0.0010 - 235ms/epoch - 4ms/step
Epoch 5/500

Epoch 00005: val_loss improved from 0.01189 to 0.01096, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0123 - mse: 0.0123 - mae: 0.0869 - val_loss: 0.0110 - val_mse: 0.0110 - val_mae: 0.0869 - lr: 0.0010 - 236ms/epoch - 4ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.01096 to 0.00941, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0106 - mse: 0.0106 - mae: 0.0801 - val_loss: 0.0094 - val_mse: 0.0094 - val_mae: 0.0801 - lr: 0.0010 - 224ms/epoch - 4ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.00941
58/58 - 0s - loss: 0.0111 - mse: 0.0111 - mae: 0.0821 - val_loss: 0.0098 - val_mse: 0.0098 - val_mae: 0.0820 - lr: 0.0010 - 208ms/epoch - 4ms/step
Epoch 8/500

Epoch 00008: val_loss improved from 0.00941 to 0.00874, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0098 - mse: 0.0098 - mae: 0.0761 - val_loss: 0.0087 - val_mse: 0.0087 - val_mae: 0.0768 - lr: 0.0010 - 239ms/epoch - 4ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.00874
58/58 - 0s - loss: 0.0105 - mse: 0.0105 - mae: 0.0793 - val_loss: 0.0093 - val_mse: 0.0093 - val_mae: 0.0803 - lr: 0.0010 - 228ms/epoch - 4ms/step
Epoch 10/500

Epoch 00010: val_loss improved from 0.00874 to 0.00850, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0103 - mse: 0.0103 - mae: 0.0764 - val_loss: 0.0085 - val_mse: 0.0085 - val_mae: 0.0750 - lr: 0.0010 - 246ms/epoch - 4ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0089 - mse: 0.0089 - mae: 0.0728 - val_loss: 0.0091 - val_mse: 0.0091 - val_mae: 0.0794 - lr: 0.0010 - 213ms/epoch - 4ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0097 - mse: 0.0097 - mae: 0.0739 - val_loss: 0.0088 - val_mse: 0.0088 - val_mae: 0.0768 - lr: 0.0010 - 217ms/epoch - 4ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0080 - mse: 0.0080 - mae: 0.0671 - val_loss: 0.0085 - val_mse: 0.0085 - val_mae: 0.0750 - lr: 0.0010 - 228ms/epoch - 4ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0109 - mse: 0.0109 - mae: 0.0778 - val_loss: 0.0085 - val_mse: 0.0085 - val_mae: 0.0747 - lr: 0.0010 - 217ms/epoch - 4ms/step
Epoch 15/500

Epoch 00015: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00015: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0071 - mse: 0.0071 - mae: 0.0636 - val_loss: 0.0094 - val_mse: 0.0094 - val_mae: 0.0767 - lr: 0.0010 - 232ms/epoch - 4ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0244 - mse: 0.0244 - mae: 0.1335 - val_loss: 0.0119 - val_mse: 0.0119 - val_mae: 0.0916 - lr: 1.0000e-04 - 221ms/epoch - 4ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0087 - mse: 0.0087 - mae: 0.0770 - val_loss: 0.0116 - val_mse: 0.0116 - val_mae: 0.0909 - lr: 1.0000e-04 - 209ms/epoch - 4ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0088 - mse: 0.0088 - mae: 0.0765 - val_loss: 0.0114 - val_mse: 0.0114 - val_mae: 0.0906 - lr: 1.0000e-04 - 227ms/epoch - 4ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0083 - mse: 0.0083 - mae: 0.0743 - val_loss: 0.0110 - val_mse: 0.0110 - val_mae: 0.0897 - lr: 1.0000e-04 - 212ms/epoch - 4ms/step
Epoch 20/500

Epoch 00020: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00020: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0077 - mse: 0.0077 - mae: 0.0714 - val_loss: 0.0110 - val_mse: 0.0110 - val_mae: 0.0899 - lr: 1.0000e-04 - 224ms/epoch - 4ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0629 - val_loss: 0.0111 - val_mse: 0.0111 - val_mae: 0.0903 - lr: 1.0000e-05 - 214ms/epoch - 4ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0632 - val_loss: 0.0112 - val_mse: 0.0112 - val_mae: 0.0909 - lr: 1.0000e-05 - 227ms/epoch - 4ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0067 - mse: 0.0067 - mae: 0.0661 - val_loss: 0.0114 - val_mse: 0.0114 - val_mae: 0.0914 - lr: 1.0000e-05 - 236ms/epoch - 4ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0635 - val_loss: 0.0114 - val_mse: 0.0114 - val_mae: 0.0916 - lr: 1.0000e-05 - 231ms/epoch - 4ms/step
Epoch 25/500

Epoch 00025: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00025: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0655 - val_loss: 0.0114 - val_mse: 0.0114 - val_mae: 0.0917 - lr: 1.0000e-05 - 218ms/epoch - 4ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0636 - val_loss: 0.0114 - val_mse: 0.0114 - val_mae: 0.0918 - lr: 1.0000e-05 - 225ms/epoch - 4ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0621 - val_loss: 0.0115 - val_mse: 0.0115 - val_mae: 0.0918 - lr: 1.0000e-05 - 225ms/epoch - 4ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0641 - val_loss: 0.0115 - val_mse: 0.0115 - val_mae: 0.0921 - lr: 1.0000e-05 - 226ms/epoch - 4ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0610 - val_loss: 0.0115 - val_mse: 0.0115 - val_mae: 0.0921 - lr: 1.0000e-05 - 226ms/epoch - 4ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0606 - val_loss: 0.0115 - val_mse: 0.0115 - val_mae: 0.0920 - lr: 1.0000e-05 - 238ms/epoch - 4ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0640 - val_loss: 0.0115 - val_mse: 0.0115 - val_mae: 0.0920 - lr: 1.0000e-05 - 221ms/epoch - 4ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0606 - val_loss: 0.0114 - val_mse: 0.0114 - val_mae: 0.0919 - lr: 1.0000e-05 - 217ms/epoch - 4ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0601 - val_loss: 0.0114 - val_mse: 0.0114 - val_mae: 0.0919 - lr: 1.0000e-05 - 221ms/epoch - 4ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0623 - val_loss: 0.0114 - val_mse: 0.0114 - val_mae: 0.0920 - lr: 1.0000e-05 - 214ms/epoch - 4ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0627 - val_loss: 0.0114 - val_mse: 0.0114 - val_mae: 0.0920 - lr: 1.0000e-05 - 225ms/epoch - 4ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0632 - val_loss: 0.0115 - val_mse: 0.0115 - val_mae: 0.0921 - lr: 1.0000e-05 - 229ms/epoch - 4ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0609 - val_loss: 0.0114 - val_mse: 0.0114 - val_mae: 0.0919 - lr: 1.0000e-05 - 227ms/epoch - 4ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0634 - val_loss: 0.0114 - val_mse: 0.0114 - val_mae: 0.0920 - lr: 1.0000e-05 - 219ms/epoch - 4ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0627 - val_loss: 0.0114 - val_mse: 0.0114 - val_mae: 0.0920 - lr: 1.0000e-05 - 217ms/epoch - 4ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0605 - val_loss: 0.0114 - val_mse: 0.0114 - val_mae: 0.0919 - lr: 1.0000e-05 - 230ms/epoch - 4ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0635 - val_loss: 0.0113 - val_mse: 0.0113 - val_mae: 0.0917 - lr: 1.0000e-05 - 219ms/epoch - 4ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0605 - val_loss: 0.0113 - val_mse: 0.0113 - val_mae: 0.0917 - lr: 1.0000e-05 - 227ms/epoch - 4ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0069 - mse: 0.0069 - mae: 0.0660 - val_loss: 0.0114 - val_mse: 0.0114 - val_mae: 0.0919 - lr: 1.0000e-05 - 221ms/epoch - 4ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0605 - val_loss: 0.0113 - val_mse: 0.0113 - val_mae: 0.0918 - lr: 1.0000e-05 - 233ms/epoch - 4ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0600 - val_loss: 0.0113 - val_mse: 0.0113 - val_mae: 0.0917 - lr: 1.0000e-05 - 229ms/epoch - 4ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0615 - val_loss: 0.0113 - val_mse: 0.0113 - val_mae: 0.0919 - lr: 1.0000e-05 - 218ms/epoch - 4ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0619 - val_loss: 0.0113 - val_mse: 0.0113 - val_mae: 0.0918 - lr: 1.0000e-05 - 217ms/epoch - 4ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0617 - val_loss: 0.0113 - val_mse: 0.0113 - val_mae: 0.0919 - lr: 1.0000e-05 - 229ms/epoch - 4ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0612 - val_loss: 0.0113 - val_mse: 0.0113 - val_mae: 0.0920 - lr: 1.0000e-05 - 220ms/epoch - 4ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0597 - val_loss: 0.0114 - val_mse: 0.0114 - val_mae: 0.0922 - lr: 1.0000e-05 - 219ms/epoch - 4ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0617 - val_loss: 0.0114 - val_mse: 0.0114 - val_mae: 0.0923 - lr: 1.0000e-05 - 231ms/epoch - 4ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0580 - val_loss: 0.0114 - val_mse: 0.0114 - val_mae: 0.0924 - lr: 1.0000e-05 - 227ms/epoch - 4ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0593 - val_loss: 0.0114 - val_mse: 0.0114 - val_mae: 0.0923 - lr: 1.0000e-05 - 229ms/epoch - 4ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0602 - val_loss: 0.0114 - val_mse: 0.0114 - val_mae: 0.0922 - lr: 1.0000e-05 - 220ms/epoch - 4ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0589 - val_loss: 0.0113 - val_mse: 0.0113 - val_mae: 0.0919 - lr: 1.0000e-05 - 231ms/epoch - 4ms/step
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0579 - val_loss: 0.0113 - val_mse: 0.0113 - val_mae: 0.0920 - lr: 1.0000e-05 - 220ms/epoch - 4ms/step
Epoch 57/500

Epoch 00057: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0604 - val_loss: 0.0113 - val_mse: 0.0113 - val_mae: 0.0920 - lr: 1.0000e-05 - 227ms/epoch - 4ms/step
Epoch 58/500

Epoch 00058: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0607 - val_loss: 0.0113 - val_mse: 0.0113 - val_mae: 0.0917 - lr: 1.0000e-05 - 219ms/epoch - 4ms/step
Epoch 59/500

Epoch 00059: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0617 - val_loss: 0.0112 - val_mse: 0.0112 - val_mae: 0.0914 - lr: 1.0000e-05 - 218ms/epoch - 4ms/step
Epoch 60/500

Epoch 00060: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0599 - val_loss: 0.0112 - val_mse: 0.0112 - val_mae: 0.0914 - lr: 1.0000e-05 - 222ms/epoch - 4ms/step
Epoch 00060: early stopping
SMA
Prediction vs Close:		51.87% Accuracy
Prediction vs Prediction:	48.88% Accuracy
MSE:	 37.148636819682245 
RMSE:	 6.094968155756209 
MAPE:	 5.090179331518223

EMA
Prediction vs Close:		55.22% Accuracy
Prediction vs Prediction:	47.39% Accuracy
MSE:	 31.582654654902484 
RMSE:	 5.619844718041815 
MAPE:	 4.507182634072088

WMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	48.88% Accuracy
MSE:	 65.00101872981564 
RMSE:	 8.062320926992156 
MAPE:	 6.705711592581163

DEMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	45.52% Accuracy
MSE:	 35.269002386685244 
RMSE:	 5.938771117553298 
MAPE:	 4.62878838931535

KAMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	48.88% Accuracy
MSE:	 62.25156682504816 
RMSE:	 7.8899662119078915 
MAPE:	 6.222956717810362

MIDPOINT
Prediction vs Close:		51.12% Accuracy
Prediction vs Prediction:	42.54% Accuracy
MSE:	 84.47531990876952 
RMSE:	 9.191045637399997 
MAPE:	 7.890202393641488

T3
T3([input_arrays], [timeperiod=5], [vfactor=0.7])

Triple Exponential Moving Average (T3) (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 5
    vfactor: 0.7
Outputs:
    real
19
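The T3 docstring above comes from TA-Lib. For reference, Tillson's T3 is a "generalized DEMA" applied three times: GD(x) = EMA(x)·(1+v) − EMA(EMA(x))·v, with T3 = GD(GD(GD(x))). A hedged pure-Python sketch of that recursion follows; TA-Lib itself treats the warm-up period differently (returning NaN for the initial lookback bars), whereas here the EMA is simply seeded with the first value:

```python
def ema(series, period):
    """Recursive exponential moving average, seeded with the first value."""
    k = 2 / (period + 1)
    out = [series[0]]
    for x in series[1:]:
        out.append(k * x + (1 - k) * out[-1])
    return out

def t3(series, period=5, vfactor=0.7):
    """Tillson T3 via three applications of the 'generalized DEMA':
    GD(x) = EMA(x)*(1+v) - EMA(EMA(x))*v;  T3 = GD(GD(GD(x)))."""
    def gd(s):
        e1 = ema(s, period)
        e2 = ema(e1, period)
        return [(1 + vfactor) * a - vfactor * b for a, b in zip(e1, e2)]
    return gd(gd(gd(series)))
```

The defaults mirror the docstring (timeperiod=5, vfactor=0.7). Because six EMAs are cascaded, T3 is much smoother than a plain EMA of the same period, at the cost of a longer effective lookback.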

Working on T3 predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-16569.270, Time=2.35 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14511.291, Time=2.49 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-15408.738, Time=7.96 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-15165.005, Time=7.92 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-15595.465, Time=6.94 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-15837.470, Time=10.15 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-15491.538, Time=9.59 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-16378.438, Time=2.50 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal
  return np.roots(self.polynomial_reduced_ma)**-1
 ARIMA(2,3,2)(0,0,0)[0]             : AIC=-16318.604, Time=3.52 sec
 ARIMA(1,3,1)(0,0,0)[0] intercept   : AIC=-16567.270, Time=2.23 sec

Best model:  ARIMA(1,3,1)(0,0,0)[0]          
Total fit time: 55.672 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(1, 3, 1)   Log Likelihood                8308.635
Date:                Sun, 12 Dec 2021   AIC                         -16569.270
Time:                        15:36:26   BIC                         -16456.690
Sample:                             0   HQIC                        -16526.035
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1          1.355e-13   3.43e-05   3.95e-09      1.000   -6.72e-05    6.72e-05
x2          5.009e-14   2.67e-05   1.88e-09      1.000   -5.23e-05    5.23e-05
x3         -9.101e-15   2.19e-05  -4.16e-10      1.000   -4.29e-05    4.29e-05
x4             1.0000   2.97e-05   3.37e+04      0.000       1.000       1.000
x5          3.626e-12    3.2e-05   1.13e-07      1.000   -6.28e-05    6.28e-05
x6          6.879e-17      0.000   5.13e-13      1.000      -0.000       0.000
x7          1.588e-13   4.04e-05   3.93e-09      1.000   -7.92e-05    7.92e-05
x8            -0.0002   9.77e-06    -20.395      0.000      -0.000      -0.000
x9          3.877e-14      0.001   6.24e-11      1.000      -0.001       0.001
x10         -7.41e-05      0.001     -0.129      0.897      -0.001       0.001
x11            0.0003   4.91e-05      6.926      0.000       0.000       0.000
x12           -0.0004   7.27e-05     -5.556      0.000      -0.001      -0.000
x13        -2.679e-14   3.39e-05   -7.9e-10      1.000   -6.65e-05    6.65e-05
x14          2.97e-13      0.000   2.31e-09      1.000      -0.000       0.000
x15         1.602e-12   7.47e-05   2.14e-08      1.000      -0.000       0.000
x16        -8.756e-13   4.29e-05  -2.04e-08      1.000   -8.41e-05    8.41e-05
x17         1.793e-12   6.56e-05   2.74e-08      1.000      -0.000       0.000
x18        -1.019e-13      0.000  -5.54e-10      1.000      -0.000       0.000
x19        -1.077e-12   8.29e-05   -1.3e-08      1.000      -0.000       0.000
x20         1.771e-13   8.45e-05    2.1e-09      1.000      -0.000       0.000
x21         9.233e-16      0.000   1.94e-12      1.000      -0.001       0.001
ar.L1         -0.2857      0.000  -2747.572      0.000      -0.286      -0.285
ma.L1         -0.9142   7.12e-06  -1.28e+05      0.000      -0.914      -0.914
sigma2          1e-10      7e-11      1.429      0.153   -3.71e-11    2.37e-10
===================================================================================
Ljung-Box (L1) (Q):                  84.32   Jarque-Bera (JB):           4804295.53
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             5.87
Prob(H) (two-sided):                  0.00   Kurtosis:                       381.28
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 2.06e+20. Standard errors may be unstable.
ARIMA order: (1, 3, 1) 
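auto_arima ranks each candidate by AIC, and the reported value can be checked directly against the summary table via AIC = 2k − 2·ln L. For the winning SARIMAX(1,3,1) fit, k counts the 21 exogenous coefficients plus ar.L1, ma.L1, and sigma2, i.e. k = 24, and ln L = 8308.635:

```python
# AIC = 2k - 2*lnL for the SARIMAX(1,3,1) fit above.
# k = 21 exogenous coefficients + ar.L1 + ma.L1 + sigma2 = 24
k = 24
log_likelihood = 8308.635
aic = 2 * k - 2 * log_likelihood
print(aic)  # -16569.27, matching the reported AIC
```

The parameter count is why a model with a slightly higher log likelihood can still lose the stepwise search: every extra AR or MA term adds 2 to the AIC before it earns anything back.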

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.06885, saving model to LSTM7.h5
43/43 - 2s - loss: 0.0587 - mse: 0.0587 - mae: 0.1834 - val_loss: 0.0689 - val_mse: 0.0689 - val_mae: 0.1965 - lr: 0.0010 - 2s/epoch - 52ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.06885 to 0.05274, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0214 - mse: 0.0214 - mae: 0.1149 - val_loss: 0.0527 - val_mse: 0.0527 - val_mae: 0.1716 - lr: 0.0010 - 171ms/epoch - 4ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.05274
43/43 - 0s - loss: 0.0214 - mse: 0.0214 - mae: 0.1128 - val_loss: 0.0567 - val_mse: 0.0567 - val_mae: 0.1724 - lr: 0.0010 - 171ms/epoch - 4ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.05274
43/43 - 0s - loss: 0.0199 - mse: 0.0199 - mae: 0.1114 - val_loss: 0.0581 - val_mse: 0.0581 - val_mae: 0.1710 - lr: 0.0010 - 173ms/epoch - 4ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.05274
43/43 - 0s - loss: 0.0192 - mse: 0.0192 - mae: 0.1094 - val_loss: 0.0548 - val_mse: 0.0548 - val_mae: 0.1643 - lr: 0.0010 - 180ms/epoch - 4ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.05274
43/43 - 0s - loss: 0.0176 - mse: 0.0176 - mae: 0.1045 - val_loss: 0.0582 - val_mse: 0.0582 - val_mae: 0.1683 - lr: 0.0010 - 164ms/epoch - 4ms/step
Epoch 7/500

Epoch 00007: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00007: val_loss did not improve from 0.05274
43/43 - 0s - loss: 0.0187 - mse: 0.0187 - mae: 0.1111 - val_loss: 0.0579 - val_mse: 0.0579 - val_mae: 0.1680 - lr: 0.0010 - 156ms/epoch - 4ms/step
Epoch 8/500

Epoch 00008: val_loss improved from 0.05274 to 0.03618, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0386 - mse: 0.0386 - mae: 0.1606 - val_loss: 0.0362 - val_mse: 0.0362 - val_mae: 0.1372 - lr: 1.0000e-04 - 178ms/epoch - 4ms/step
Epoch 9/500

Epoch 00009: val_loss improved from 0.03618 to 0.03334, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0114 - mse: 0.0114 - mae: 0.0866 - val_loss: 0.0333 - val_mse: 0.0333 - val_mae: 0.1355 - lr: 1.0000e-04 - 179ms/epoch - 4ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.03334
43/43 - 0s - loss: 0.0089 - mse: 0.0089 - mae: 0.0771 - val_loss: 0.0336 - val_mse: 0.0336 - val_mae: 0.1359 - lr: 1.0000e-04 - 181ms/epoch - 4ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.03334
43/43 - 0s - loss: 0.0085 - mse: 0.0085 - mae: 0.0752 - val_loss: 0.0352 - val_mse: 0.0352 - val_mae: 0.1373 - lr: 1.0000e-04 - 170ms/epoch - 4ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.03334
43/43 - 0s - loss: 0.0080 - mse: 0.0080 - mae: 0.0728 - val_loss: 0.0366 - val_mse: 0.0366 - val_mae: 0.1391 - lr: 1.0000e-04 - 169ms/epoch - 4ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.03334
43/43 - 0s - loss: 0.0079 - mse: 0.0079 - mae: 0.0714 - val_loss: 0.0378 - val_mse: 0.0378 - val_mae: 0.1407 - lr: 1.0000e-04 - 170ms/epoch - 4ms/step
Epoch 14/500

Epoch 00014: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00014: val_loss did not improve from 0.03334
43/43 - 0s - loss: 0.0077 - mse: 0.0077 - mae: 0.0706 - val_loss: 0.0387 - val_mse: 0.0387 - val_mae: 0.1423 - lr: 1.0000e-04 - 159ms/epoch - 4ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.03334
43/43 - 0s - loss: 0.0070 - mse: 0.0070 - mae: 0.0666 - val_loss: 0.0387 - val_mse: 0.0387 - val_mae: 0.1422 - lr: 1.0000e-05 - 157ms/epoch - 4ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.03334
43/43 - 0s - loss: 0.0070 - mse: 0.0070 - mae: 0.0669 - val_loss: 0.0387 - val_mse: 0.0387 - val_mae: 0.1423 - lr: 1.0000e-05 - 184ms/epoch - 4ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.03334
43/43 - 0s - loss: 0.0069 - mse: 0.0069 - mae: 0.0669 - val_loss: 0.0387 - val_mse: 0.0387 - val_mae: 0.1423 - lr: 1.0000e-05 - 171ms/epoch - 4ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.03334
43/43 - 0s - loss: 0.0072 - mse: 0.0072 - mae: 0.0685 - val_loss: 0.0387 - val_mse: 0.0387 - val_mae: 0.1424 - lr: 1.0000e-05 - 172ms/epoch - 4ms/step
Epoch 19/500

Epoch 00019: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00019: val_loss did not improve from 0.03334
43/43 - 0s - loss: 0.0073 - mse: 0.0073 - mae: 0.0694 - val_loss: 0.0387 - val_mse: 0.0387 - val_mae: 0.1424 - lr: 1.0000e-05 - 171ms/epoch - 4ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.03334
43/43 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0636 - val_loss: 0.0388 - val_mse: 0.0388 - val_mae: 0.1426 - lr: 1.0000e-05 - 162ms/epoch - 4ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.03334
43/43 - 0s - loss: 0.0072 - mse: 0.0072 - mae: 0.0676 - val_loss: 0.0389 - val_mse: 0.0389 - val_mae: 0.1427 - lr: 1.0000e-05 - 169ms/epoch - 4ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.03334
43/43 - 0s - loss: 0.0074 - mse: 0.0074 - mae: 0.0694 - val_loss: 0.0390 - val_mse: 0.0390 - val_mae: 0.1428 - lr: 1.0000e-05 - 173ms/epoch - 4ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.03334
43/43 - 0s - loss: 0.0070 - mse: 0.0070 - mae: 0.0675 - val_loss: 0.0390 - val_mse: 0.0390 - val_mae: 0.1428 - lr: 1.0000e-05 - 165ms/epoch - 4ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.03334
43/43 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0649 - val_loss: 0.0390 - val_mse: 0.0390 - val_mae: 0.1429 - lr: 1.0000e-05 - 178ms/epoch - 4ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.03334
43/43 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0626 - val_loss: 0.0391 - val_mse: 0.0391 - val_mae: 0.1431 - lr: 1.0000e-05 - 169ms/epoch - 4ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.03334
43/43 - 0s - loss: 0.0069 - mse: 0.0069 - mae: 0.0663 - val_loss: 0.0393 - val_mse: 0.0393 - val_mae: 0.1433 - lr: 1.0000e-05 - 164ms/epoch - 4ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.03334
43/43 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0621 - val_loss: 0.0395 - val_mse: 0.0395 - val_mae: 0.1435 - lr: 1.0000e-05 - 180ms/epoch - 4ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.03334
43/43 - 0s - loss: 0.0072 - mse: 0.0072 - mae: 0.0684 - val_loss: 0.0395 - val_mse: 0.0395 - val_mae: 0.1436 - lr: 1.0000e-05 - 181ms/epoch - 4ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.03334
43/43 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0660 - val_loss: 0.0396 - val_mse: 0.0396 - val_mae: 0.1437 - lr: 1.0000e-05 - 168ms/epoch - 4ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.03334
43/43 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0654 - val_loss: 0.0397 - val_mse: 0.0397 - val_mae: 0.1439 - lr: 1.0000e-05 - 166ms/epoch - 4ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.03334
43/43 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0655 - val_loss: 0.0398 - val_mse: 0.0398 - val_mae: 0.1440 - lr: 1.0000e-05 - 164ms/epoch - 4ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.03334
43/43 - 0s - loss: 0.0069 - mse: 0.0069 - mae: 0.0663 - val_loss: 0.0399 - val_mse: 0.0399 - val_mae: 0.1441 - lr: 1.0000e-05 - 162ms/epoch - 4ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.03334
43/43 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0644 - val_loss: 0.0401 - val_mse: 0.0401 - val_mae: 0.1444 - lr: 1.0000e-05 - 181ms/epoch - 4ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.03334
43/43 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0641 - val_loss: 0.0403 - val_mse: 0.0403 - val_mae: 0.1446 - lr: 1.0000e-05 - 167ms/epoch - 4ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.03334
43/43 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0634 - val_loss: 0.0404 - val_mse: 0.0404 - val_mae: 0.1448 - lr: 1.0000e-05 - 166ms/epoch - 4ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.03334
43/43 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0629 - val_loss: 0.0403 - val_mse: 0.0403 - val_mae: 0.1448 - lr: 1.0000e-05 - 167ms/epoch - 4ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.03334
43/43 - 0s - loss: 0.0069 - mse: 0.0069 - mae: 0.0657 - val_loss: 0.0402 - val_mse: 0.0402 - val_mae: 0.1447 - lr: 1.0000e-05 - 174ms/epoch - 4ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.03334
43/43 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0643 - val_loss: 0.0401 - val_mse: 0.0401 - val_mae: 0.1445 - lr: 1.0000e-05 - 162ms/epoch - 4ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.03334
43/43 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0657 - val_loss: 0.0402 - val_mse: 0.0402 - val_mae: 0.1447 - lr: 1.0000e-05 - 182ms/epoch - 4ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.03334
43/43 - 0s - loss: 0.0067 - mse: 0.0067 - mae: 0.0657 - val_loss: 0.0404 - val_mse: 0.0404 - val_mae: 0.1450 - lr: 1.0000e-05 - 176ms/epoch - 4ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.03334
43/43 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0650 - val_loss: 0.0406 - val_mse: 0.0406 - val_mae: 0.1452 - lr: 1.0000e-05 - 171ms/epoch - 4ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.03334
43/43 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0634 - val_loss: 0.0408 - val_mse: 0.0408 - val_mae: 0.1455 - lr: 1.0000e-05 - 167ms/epoch - 4ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.03334
43/43 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0651 - val_loss: 0.0409 - val_mse: 0.0409 - val_mae: 0.1455 - lr: 1.0000e-05 - 163ms/epoch - 4ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.03334
43/43 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0655 - val_loss: 0.0408 - val_mse: 0.0408 - val_mae: 0.1455 - lr: 1.0000e-05 - 170ms/epoch - 4ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.03334
43/43 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0632 - val_loss: 0.0409 - val_mse: 0.0409 - val_mae: 0.1456 - lr: 1.0000e-05 - 168ms/epoch - 4ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.03334
43/43 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0635 - val_loss: 0.0409 - val_mse: 0.0409 - val_mae: 0.1456 - lr: 1.0000e-05 - 166ms/epoch - 4ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.03334
43/43 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0617 - val_loss: 0.0408 - val_mse: 0.0408 - val_mae: 0.1455 - lr: 1.0000e-05 - 160ms/epoch - 4ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.03334
43/43 - 0s - loss: 0.0070 - mse: 0.0070 - mae: 0.0671 - val_loss: 0.0408 - val_mse: 0.0408 - val_mae: 0.1455 - lr: 1.0000e-05 - 163ms/epoch - 4ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.03334
43/43 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0616 - val_loss: 0.0410 - val_mse: 0.0410 - val_mae: 0.1458 - lr: 1.0000e-05 - 157ms/epoch - 4ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.03334
43/43 - 0s - loss: 0.0067 - mse: 0.0067 - mae: 0.0664 - val_loss: 0.0410 - val_mse: 0.0410 - val_mae: 0.1457 - lr: 1.0000e-05 - 171ms/epoch - 4ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.03334
43/43 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0628 - val_loss: 0.0410 - val_mse: 0.0410 - val_mae: 0.1458 - lr: 1.0000e-05 - 189ms/epoch - 4ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.03334
43/43 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0634 - val_loss: 0.0412 - val_mse: 0.0412 - val_mae: 0.1461 - lr: 1.0000e-05 - 169ms/epoch - 4ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.03334
43/43 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0646 - val_loss: 0.0414 - val_mse: 0.0414 - val_mae: 0.1463 - lr: 1.0000e-05 - 164ms/epoch - 4ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.03334
43/43 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0642 - val_loss: 0.0413 - val_mse: 0.0413 - val_mae: 0.1462 - lr: 1.0000e-05 - 158ms/epoch - 4ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.03334
43/43 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0635 - val_loss: 0.0414 - val_mse: 0.0414 - val_mae: 0.1463 - lr: 1.0000e-05 - 166ms/epoch - 4ms/step
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.03334
43/43 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0615 - val_loss: 0.0416 - val_mse: 0.0416 - val_mae: 0.1466 - lr: 1.0000e-05 - 162ms/epoch - 4ms/step
Epoch 57/500

Epoch 00057: val_loss did not improve from 0.03334
43/43 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0615 - val_loss: 0.0417 - val_mse: 0.0417 - val_mae: 0.1468 - lr: 1.0000e-05 - 171ms/epoch - 4ms/step
Epoch 58/500

Epoch 00058: val_loss did not improve from 0.03334
43/43 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0605 - val_loss: 0.0420 - val_mse: 0.0420 - val_mae: 0.1472 - lr: 1.0000e-05 - 167ms/epoch - 4ms/step
Epoch 59/500

Epoch 00059: val_loss did not improve from 0.03334
43/43 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0634 - val_loss: 0.0423 - val_mse: 0.0423 - val_mae: 0.1476 - lr: 1.0000e-05 - 171ms/epoch - 4ms/step
Epoch 00059: early stopping
T3
Prediction vs Close:		54.85% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 92.6070950307407 
RMSE:	 9.623258025780078 
MAPE:	 8.212466968891306

TEMA
TEMA([input_arrays], [timeperiod=30])

Triple Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
9
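TEMA, per the TA-Lib docstring above, combines three cascaded EMAs so that the lag of a single EMA largely cancels: TEMA = 3·EMA − 3·EMA(EMA) + EMA(EMA(EMA)). A hedged pure-Python sketch (again seeding the EMA with the first value, where TA-Lib would instead emit NaN over the lookback window):

```python
def ema(series, period):
    """Recursive exponential moving average, seeded with the first value."""
    k = 2 / (period + 1)
    out = [series[0]]
    for x in series[1:]:
        out.append(k * x + (1 - k) * out[-1])
    return out

def tema(series, period=30):
    """TEMA = 3*EMA - 3*EMA(EMA) + EMA(EMA(EMA)).
    The alternating-sign combination cancels most of the EMA lag."""
    e1 = ema(series, period)
    e2 = ema(e1, period)
    e3 = ema(e2, period)
    return [3 * a - 3 * b + c for a, b, c in zip(e1, e2, e3)]
```

Unlike T3, TEMA has no volume factor to tune, so its only parameter is the period (30 by default in TA-Lib).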

Working on TEMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-16493.570, Time=2.59 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-15527.581, Time=7.51 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16154.477, Time=7.44 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-15134.948, Time=6.94 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-16538.454, Time=8.24 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-16271.346, Time=2.18 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=-16350.992, Time=13.28 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal
  return np.roots(self.polynomial_reduced_ma)**-1
 ARIMA(2,3,2)(0,0,0)[0]             : AIC=-16200.149, Time=3.30 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-16461.809, Time=15.38 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=-16384.147, Time=3.00 sec
 ARIMA(3,3,2)(0,0,0)[0]             : AIC=inf, Time=10.18 sec
 ARIMA(2,3,1)(0,0,0)[0] intercept   : AIC=-15110.164, Time=5.74 sec

Best model:  ARIMA(2,3,1)(0,0,0)[0]          
Total fit time: 85.805 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(2, 3, 1)   Log Likelihood                8294.227
Date:                Sun, 12 Dec 2021   AIC                         -16538.454
Time:                        15:40:52   BIC                         -16421.183
Sample:                             0   HQIC                        -16493.417
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1          3.591e-07      0.001      0.000      1.000      -0.002       0.002
x2            3.6e-07      0.002      0.000      1.000      -0.003       0.003
x3          3.611e-07      0.001      0.000      1.000      -0.002       0.002
x4             1.0000      0.000   2628.605      0.000       0.999       1.001
x5          3.432e-07      0.000      0.001      0.999      -0.001       0.001
x6          1.714e-07   4.05e-05      0.004      0.997   -7.91e-05    7.95e-05
x7          3.541e-07      0.001      0.000      1.000      -0.003       0.003
x8            -0.0002      0.000     -1.006      0.315      -0.001       0.000
x9         -7.559e-08      0.000     -0.000      1.000      -0.001       0.001
x10            0.0001      0.000      0.492      0.623      -0.000       0.001
x11           -0.0006      0.000     -2.697      0.007      -0.001      -0.000
x12            0.0005      0.000      1.741      0.082   -5.97e-05       0.001
x13           3.6e-07      0.000      0.002      0.999      -0.000       0.000
x14         1.003e-06      0.001      0.001      0.999      -0.002       0.002
x15         3.506e-07   7.16e-05      0.005      0.996      -0.000       0.000
x16         5.157e-07      0.000      0.005      0.996      -0.000       0.000
x17         3.516e-07   6.59e-05      0.005      0.996      -0.000       0.000
x18         1.166e-07      0.000      0.001      1.000      -0.000       0.000
x19         3.922e-07    7.5e-05      0.005      0.996      -0.000       0.000
x20         -3.64e-07      0.000     -0.002      0.999      -0.000       0.000
x21         4.458e-07      0.000      0.004      0.997      -0.000       0.000
ar.L1         -0.4019   4.12e-05  -9758.484      0.000      -0.402      -0.402
ar.L2         -0.1006   1.58e-05  -6360.873      0.000      -0.101      -0.101
ma.L1         -0.7963   8.45e-06  -9.43e+04      0.000      -0.796      -0.796
sigma2      9.048e-11    7.2e-11      1.257      0.209   -5.06e-11    2.32e-10
===================================================================================
Ljung-Box (L1) (Q):                  64.02   Jarque-Bera (JB):           4424775.47
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             5.53
Prob(H) (two-sided):                  0.00   Kurtosis:                       366.04
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.02e+20. Standard errors may be unstable.
ARIMA order: (2, 3, 1) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.06351, saving model to LSTM7.h5
90/90 - 2s - loss: 0.1164 - mse: 0.1164 - mae: 0.2369 - val_loss: 0.0635 - val_mse: 0.0635 - val_mae: 0.2342 - lr: 0.0010 - 2s/epoch - 26ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.06351
90/90 - 0s - loss: 0.0217 - mse: 0.0217 - mae: 0.1164 - val_loss: 0.0752 - val_mse: 0.0752 - val_mae: 0.2599 - lr: 0.0010 - 322ms/epoch - 4ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.06351 to 0.03991, saving model to LSTM7.h5
90/90 - 0s - loss: 0.0146 - mse: 0.0146 - mae: 0.0949 - val_loss: 0.0399 - val_mse: 0.0399 - val_mae: 0.1852 - lr: 0.0010 - 333ms/epoch - 4ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.03991 to 0.02556, saving model to LSTM7.h5
90/90 - 0s - loss: 0.0109 - mse: 0.0109 - mae: 0.0820 - val_loss: 0.0256 - val_mse: 0.0256 - val_mae: 0.1452 - lr: 0.0010 - 334ms/epoch - 4ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.02556
90/90 - 0s - loss: 0.0072 - mse: 0.0072 - mae: 0.0674 - val_loss: 0.0340 - val_mse: 0.0340 - val_mae: 0.1718 - lr: 0.0010 - 321ms/epoch - 4ms/step
Epoch 6/500

Epoch 00006: val_loss improved from 0.02556 to 0.01074, saving model to LSTM7.h5
90/90 - 0s - loss: 0.0091 - mse: 0.0091 - mae: 0.0717 - val_loss: 0.0107 - val_mse: 0.0107 - val_mae: 0.0886 - lr: 0.0010 - 322ms/epoch - 4ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.01074
90/90 - 0s - loss: 0.0113 - mse: 0.0113 - mae: 0.0827 - val_loss: 0.0345 - val_mse: 0.0345 - val_mae: 0.1738 - lr: 0.0010 - 324ms/epoch - 4ms/step
Epoch 8/500

Epoch 00008: val_loss improved from 0.01074 to 0.00726, saving model to LSTM7.h5
90/90 - 0s - loss: 0.0197 - mse: 0.0197 - mae: 0.1079 - val_loss: 0.0073 - val_mse: 0.0073 - val_mae: 0.0705 - lr: 0.0010 - 330ms/epoch - 4ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.00726
90/90 - 0s - loss: 0.0265 - mse: 0.0265 - mae: 0.1264 - val_loss: 0.0524 - val_mse: 0.0524 - val_mae: 0.2111 - lr: 0.0010 - 324ms/epoch - 4ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.00726
90/90 - 0s - loss: 0.0165 - mse: 0.0165 - mae: 0.0981 - val_loss: 0.0132 - val_mse: 0.0132 - val_mae: 0.0965 - lr: 0.0010 - 330ms/epoch - 4ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.00726
90/90 - 0s - loss: 0.0090 - mse: 0.0090 - mae: 0.0742 - val_loss: 0.0873 - val_mse: 0.0873 - val_mae: 0.2716 - lr: 0.0010 - 313ms/epoch - 3ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.00726
90/90 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0619 - val_loss: 0.0194 - val_mse: 0.0194 - val_mae: 0.1182 - lr: 0.0010 - 323ms/epoch - 4ms/step
Epoch 13/500

Epoch 00013: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00013: val_loss did not improve from 0.00726
90/90 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0618 - val_loss: 0.1000 - val_mse: 0.1000 - val_mae: 0.2899 - lr: 0.0010 - 314ms/epoch - 3ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.00726
90/90 - 0s - loss: 0.0102 - mse: 0.0102 - mae: 0.0809 - val_loss: 0.0546 - val_mse: 0.0546 - val_mae: 0.2069 - lr: 1.0000e-04 - 306ms/epoch - 3ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.00726
90/90 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0528 - val_loss: 0.0485 - val_mse: 0.0485 - val_mae: 0.1932 - lr: 1.0000e-04 - 325ms/epoch - 4ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.00726
90/90 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0543 - val_loss: 0.0455 - val_mse: 0.0455 - val_mae: 0.1859 - lr: 1.0000e-04 - 313ms/epoch - 3ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.00726
90/90 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0513 - val_loss: 0.0446 - val_mse: 0.0446 - val_mae: 0.1836 - lr: 1.0000e-04 - 327ms/epoch - 4ms/step
Epoch 18/500

Epoch 00018: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00018: val_loss did not improve from 0.00726
90/90 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0539 - val_loss: 0.0432 - val_mse: 0.0432 - val_mae: 0.1800 - lr: 1.0000e-04 - 310ms/epoch - 3ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.00726
90/90 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0478 - val_loss: 0.0425 - val_mse: 0.0425 - val_mae: 0.1782 - lr: 1.0000e-05 - 321ms/epoch - 4ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.00726
90/90 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0512 - val_loss: 0.0417 - val_mse: 0.0417 - val_mae: 0.1763 - lr: 1.0000e-05 - 331ms/epoch - 4ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.00726
90/90 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0503 - val_loss: 0.0416 - val_mse: 0.0416 - val_mae: 0.1760 - lr: 1.0000e-05 - 310ms/epoch - 3ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.00726
90/90 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0501 - val_loss: 0.0408 - val_mse: 0.0408 - val_mae: 0.1740 - lr: 1.0000e-05 - 313ms/epoch - 3ms/step
Epoch 23/500

Epoch 00023: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00023: val_loss did not improve from 0.00726
90/90 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0503 - val_loss: 0.0405 - val_mse: 0.0405 - val_mae: 0.1734 - lr: 1.0000e-05 - 315ms/epoch - 3ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.00726
90/90 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0489 - val_loss: 0.0403 - val_mse: 0.0403 - val_mae: 0.1727 - lr: 1.0000e-05 - 329ms/epoch - 4ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.00726
90/90 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0473 - val_loss: 0.0400 - val_mse: 0.0400 - val_mae: 0.1720 - lr: 1.0000e-05 - 315ms/epoch - 4ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.00726
90/90 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0522 - val_loss: 0.0400 - val_mse: 0.0400 - val_mae: 0.1719 - lr: 1.0000e-05 - 310ms/epoch - 3ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.00726
90/90 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0500 - val_loss: 0.0397 - val_mse: 0.0397 - val_mae: 0.1712 - lr: 1.0000e-05 - 312ms/epoch - 3ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.00726
90/90 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0490 - val_loss: 0.0398 - val_mse: 0.0398 - val_mae: 0.1713 - lr: 1.0000e-05 - 309ms/epoch - 3ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.00726
90/90 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0488 - val_loss: 0.0398 - val_mse: 0.0398 - val_mae: 0.1713 - lr: 1.0000e-05 - 317ms/epoch - 4ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.00726
90/90 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0475 - val_loss: 0.0402 - val_mse: 0.0402 - val_mae: 0.1723 - lr: 1.0000e-05 - 315ms/epoch - 3ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.00726
90/90 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0502 - val_loss: 0.0401 - val_mse: 0.0401 - val_mae: 0.1721 - lr: 1.0000e-05 - 320ms/epoch - 4ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.00726
90/90 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0501 - val_loss: 0.0405 - val_mse: 0.0405 - val_mae: 0.1729 - lr: 1.0000e-05 - 333ms/epoch - 4ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.00726
90/90 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0489 - val_loss: 0.0407 - val_mse: 0.0407 - val_mae: 0.1735 - lr: 1.0000e-05 - 317ms/epoch - 4ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.00726
90/90 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0484 - val_loss: 0.0409 - val_mse: 0.0409 - val_mae: 0.1739 - lr: 1.0000e-05 - 319ms/epoch - 4ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.00726
90/90 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0488 - val_loss: 0.0406 - val_mse: 0.0406 - val_mae: 0.1730 - lr: 1.0000e-05 - 320ms/epoch - 4ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.00726
90/90 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0483 - val_loss: 0.0407 - val_mse: 0.0407 - val_mae: 0.1732 - lr: 1.0000e-05 - 313ms/epoch - 3ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.00726
90/90 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0509 - val_loss: 0.0409 - val_mse: 0.0409 - val_mae: 0.1737 - lr: 1.0000e-05 - 307ms/epoch - 3ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.00726
90/90 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0508 - val_loss: 0.0413 - val_mse: 0.0413 - val_mae: 0.1747 - lr: 1.0000e-05 - 316ms/epoch - 4ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.00726
90/90 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0479 - val_loss: 0.0414 - val_mse: 0.0414 - val_mae: 0.1749 - lr: 1.0000e-05 - 323ms/epoch - 4ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.00726
90/90 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0485 - val_loss: 0.0410 - val_mse: 0.0410 - val_mae: 0.1738 - lr: 1.0000e-05 - 312ms/epoch - 3ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.00726
90/90 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0482 - val_loss: 0.0413 - val_mse: 0.0413 - val_mae: 0.1745 - lr: 1.0000e-05 - 307ms/epoch - 3ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.00726
90/90 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0497 - val_loss: 0.0415 - val_mse: 0.0415 - val_mae: 0.1750 - lr: 1.0000e-05 - 320ms/epoch - 4ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.00726
90/90 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0478 - val_loss: 0.0415 - val_mse: 0.0415 - val_mae: 0.1747 - lr: 1.0000e-05 - 309ms/epoch - 3ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.00726
90/90 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0480 - val_loss: 0.0414 - val_mse: 0.0414 - val_mae: 0.1745 - lr: 1.0000e-05 - 321ms/epoch - 4ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.00726
90/90 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0487 - val_loss: 0.0413 - val_mse: 0.0413 - val_mae: 0.1743 - lr: 1.0000e-05 - 322ms/epoch - 4ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.00726
90/90 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0507 - val_loss: 0.0408 - val_mse: 0.0408 - val_mae: 0.1730 - lr: 1.0000e-05 - 315ms/epoch - 3ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.00726
90/90 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0478 - val_loss: 0.0409 - val_mse: 0.0409 - val_mae: 0.1733 - lr: 1.0000e-05 - 306ms/epoch - 3ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.00726
90/90 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0480 - val_loss: 0.0405 - val_mse: 0.0405 - val_mae: 0.1723 - lr: 1.0000e-05 - 319ms/epoch - 4ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.00726
90/90 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0492 - val_loss: 0.0412 - val_mse: 0.0412 - val_mae: 0.1738 - lr: 1.0000e-05 - 323ms/epoch - 4ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.00726
90/90 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0504 - val_loss: 0.0416 - val_mse: 0.0416 - val_mae: 0.1747 - lr: 1.0000e-05 - 325ms/epoch - 4ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.00726
90/90 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0499 - val_loss: 0.0421 - val_mse: 0.0421 - val_mae: 0.1758 - lr: 1.0000e-05 - 311ms/epoch - 3ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.00726
90/90 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0482 - val_loss: 0.0426 - val_mse: 0.0426 - val_mae: 0.1770 - lr: 1.0000e-05 - 318ms/epoch - 4ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.00726
90/90 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0484 - val_loss: 0.0432 - val_mse: 0.0432 - val_mae: 0.1784 - lr: 1.0000e-05 - 330ms/epoch - 4ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.00726
90/90 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0465 - val_loss: 0.0432 - val_mse: 0.0432 - val_mae: 0.1782 - lr: 1.0000e-05 - 330ms/epoch - 4ms/step
Epoch 55/500

Epoch 00055: val_loss did not improve from 0.00726
90/90 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0494 - val_loss: 0.0426 - val_mse: 0.0426 - val_mae: 0.1768 - lr: 1.0000e-05 - 310ms/epoch - 3ms/step
Epoch 56/500

Epoch 00056: val_loss did not improve from 0.00726
90/90 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0485 - val_loss: 0.0432 - val_mse: 0.0432 - val_mae: 0.1782 - lr: 1.0000e-05 - 314ms/epoch - 3ms/step
Epoch 57/500

Epoch 00057: val_loss did not improve from 0.00726
90/90 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0473 - val_loss: 0.0426 - val_mse: 0.0426 - val_mae: 0.1766 - lr: 1.0000e-05 - 318ms/epoch - 4ms/step
Epoch 58/500

Epoch 00058: val_loss did not improve from 0.00726
90/90 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0493 - val_loss: 0.0435 - val_mse: 0.0435 - val_mae: 0.1787 - lr: 1.0000e-05 - 340ms/epoch - 4ms/step
Epoch 00058: early stopping
SMA
Prediction vs Close:		51.87% Accuracy
Prediction vs Prediction:	48.88% Accuracy
MSE:	 37.148636819682245 
RMSE:	 6.094968155756209 
MAPE:	 5.090179331518223

EMA
Prediction vs Close:		55.22% Accuracy
Prediction vs Prediction:	47.39% Accuracy
MSE:	 31.582654654902484 
RMSE:	 5.619844718041815 
MAPE:	 4.507182634072088

WMA
Prediction vs Close:		52.99% Accuracy
Prediction vs Prediction:	48.88% Accuracy
MSE:	 65.00101872981564 
RMSE:	 8.062320926992156 
MAPE:	 6.705711592581163

DEMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	45.52% Accuracy
MSE:	 35.269002386685244 
RMSE:	 5.938771117553298 
MAPE:	 4.62878838931535

KAMA
Prediction vs Close:		54.1% Accuracy
Prediction vs Prediction:	48.88% Accuracy
MSE:	 62.25156682504816 
RMSE:	 7.8899662119078915 
MAPE:	 6.222956717810362

MIDPOINT
Prediction vs Close:		51.12% Accuracy
Prediction vs Prediction:	42.54% Accuracy
MSE:	 84.47531990876952 
RMSE:	 9.191045637399997 
MAPE:	 7.890202393641488

T3
Prediction vs Close:		54.85% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 92.6070950307407 
RMSE:	 9.623258025780078 
MAPE:	 8.212466968891306

TEMA
Prediction vs Close:		51.49% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 50.229608880159034 
RMSE:	 7.087285014740061 
MAPE:	 6.2877236827531835
Runtime: mins: 42.43883018304999
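The two accuracy figures reported for each moving average are directional hit rates. A minimal sketch of how they can be computed (function name and toy data are hypothetical; ties are handled slightly differently than in the experiment loop, which counts a zero change as a miss):

```python
import numpy as np

def directional_accuracy(pred, actual):
    """Hit rate of the predicted direction: vs. the previous close,
    and vs. the previous prediction."""
    pred, actual = np.asarray(pred, float), np.asarray(actual, float)
    true_dir = np.sign(actual[1:] - actual[:-1])
    vs_close = np.mean(np.sign(pred[1:] - actual[:-1]) == true_dir)
    vs_pred = np.mean(np.sign(pred[1:] - pred[:-1]) == true_dir)
    return vs_close, vs_pred

# toy series: the prediction tracks the actual direction on 2 of 3 moves
acc_close, acc_pred = directional_accuracy(
    [10.0, 11.0, 10.5, 12.0], [10.0, 10.8, 11.0, 11.5])
# acc_close == acc_pred == 2/3
```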

Architecture Used

In [ ]:
from google.colab import files
import cv2
import matplotlib.pyplot as plt
uploaded = files.upload()
In [ ]:
imgfile = 'Experiment5'
img = cv2.imread(imgfile + '.png')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # OpenCV loads BGR; convert for matplotlib
plt.figure(figsize=(20,10))
plt.axis("off")
plt.title('LSTM Architecture ' + imgfile, fontsize=18)
plt.imshow(img)
Out[ ]:
<matplotlib.image.AxesImage at 0x7fda6d2f7ed0>

Model Plots

In [83]:
with open('simulation7_data.json') as json_file:
    simulation7 = json.load(json_file)
fileimg = 'Experiment7'
In [84]:
for i in range(len(list(simulation7.keys()))):
  SIM = list(simulation7.keys())[i]
  plot_train(simulation7,SIM)
  plot_test(simulation7,SIM)
----- Train RMSE for SMA ----- 8.955057105534777
----- Train_MSE_LSTM for SMA ----- 80.19304776338892
----- Train MAE LSTM for SMA ----- 7.794913717002681
----- Test RMSE for SMA----- 6.094968155756209
----- Test_MSE_LSTM for SMA----- 37.148636819682245
----- Test_MAE_LSTM for SMA----- 5.090179331518223
----- Train RMSE for EMA ----- 10.61356917187827
----- Train_MSE_LSTM for EMA ----- 112.6478505662448
----- Train MAE LSTM for EMA ----- 9.447697131450768
----- Test RMSE for EMA----- 5.619844718041815
----- Test_MSE_LSTM for EMA----- 31.582654654902484
----- Test_MAE_LSTM for EMA----- 4.507182634072088
----- Train RMSE for WMA ----- 10.751551158526922
----- Train_MSE_LSTM for WMA ----- 115.59585231442159
----- Train MAE LSTM for WMA ----- 9.750787083557483
----- Test RMSE for WMA----- 8.062320926992156
----- Test_MSE_LSTM for WMA----- 65.00101872981564
----- Test_MAE_LSTM for WMA----- 6.705711592581163
----- Train RMSE for DEMA ----- 12.583936483896219
----- Train_MSE_LSTM for DEMA ----- 158.3554574307343
----- Train MAE LSTM for DEMA ----- 11.414769962937156
----- Test RMSE for DEMA----- 5.938771117553298
----- Test_MSE_LSTM for DEMA----- 35.269002386685244
----- Test_MAE_LSTM for DEMA----- 4.62878838931535
----- Train RMSE for KAMA ----- 10.911962583126925
----- Train_MSE_LSTM for KAMA ----- 119.07092741556205
----- Train MAE LSTM for KAMA ----- 9.89734832853964
----- Test RMSE for KAMA----- 7.8899662119078915
----- Test_MSE_LSTM for KAMA----- 62.25156682504816
----- Test_MAE_LSTM for KAMA----- 6.222956717810362
----- Train RMSE for MIDPOINT ----- 9.704593247771532
----- Train_MSE_LSTM for MIDPOINT ----- 94.17913010469283
----- Train MAE LSTM for MIDPOINT ----- 8.665870629654135
----- Test RMSE for MIDPOINT----- 9.191045637399997
----- Test_MSE_LSTM for MIDPOINT----- 84.47531990876952
----- Test_MAE_LSTM for MIDPOINT----- 7.890202393641488
----- Train RMSE for T3 ----- 12.326458508446358
----- Train_MSE_LSTM for T3 ----- 151.94157936044962
----- Train MAE LSTM for T3 ----- 11.126900240195694
----- Test RMSE for T3----- 9.623258025780078
----- Test_MSE_LSTM for T3----- 92.6070950307407
----- Test_MAE_LSTM for T3----- 8.212466968891306
----- Train RMSE for TEMA ----- 7.3831693218457195
----- Train_MSE_LSTM for TEMA ----- 54.51118923504378
----- Train MAE LSTM for TEMA ----- 5.0852937426664155
----- Test RMSE for TEMA----- 7.087285014740061
----- Test_MSE_LSTM for TEMA----- 50.229608880159034
----- Test_MAE_LSTM for TEMA----- 6.2877236827531835

ARIMA with Exogenous Variables, Multistep Multivariate LSTM Hybrid Model - Experiment 8
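As in the earlier experiments, the series is split into a low-volatility component (the moving average, handed to ARIMA) and a high-volatility component (the residual, handed to the LSTM); the final forecast is their sum. A minimal sketch of the decomposition, with a plain pandas rolling mean standing in for the TA-Lib moving-average functions and a made-up price series:

```python
import pandas as pd

close = pd.Series([100.0, 102.0, 101.0, 105.0, 107.0, 106.0, 110.0, 111.0])

# low-volatility component: a simple moving average (NaNs filled with 0,
# matching the fillna(0) used in the experiment loop)
low_vol = close.rolling(window=3).mean().fillna(0)

# high-volatility component: whatever the moving average leaves behind
high_vol = close - low_vol

# the hybrid model forecasts each component separately and adds them back,
# so the decomposition must be invertible (up to float rounding)
reconstructed = low_vol + high_vol
```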

In [ ]:
def get_arima_exog(dataframe, original_data, train_len, test_len):
    # prepare train and test data for the exogenous variables
    X_value = pd.DataFrame(dataframe.iloc[:, :])
    y_value = pd.DataFrame(dataframe.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scale_dataset = X_scaler.fit_transform(X_value)
    y_scale_dataset = y_scaler.fit_transform(y_value)
    # Get data and check shape
    # X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)#X will be of shape 224 X 3 X 21 (each 3 X 21 array will be 3 days' worth of data). yc will have the corresponding closing price value
    # pdb.set_trace()
    X_train, X_test, = split_train_test(X_scale_dataset)
    y_train, y_test, = split_train_test(y_scale_dataset)
    yc_train, yc_test = split_train_test(original_data)
    yc = yc_test.values.tolist()
    y_train_list = y_train.flatten().tolist()
    y_test_list = y_test.flatten().tolist()
    index_train, index_test, = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)

    # Initialize model
    model = auto_arima(y_train_list,exogenous  = X_train,trace=True, error_action='ignore', start_p=1,start_q=1,max_p=3,max_q=3,d=3,
            suppress_warnings=True,stepwise=True,seasonal=True)

    # Determine model parameters
    print(model.summary())
    model.fit(y_train_list,maxiter=200)
    order = model.get_params()['order']
    print('ARIMA order:', order, '\n')

    # Generate predictions
    prediction = []
    for i in range(len(y_test_list)):
        model = pmdarima.ARIMA(order=order)
        model.fit(y_train_list)
        # print('working on', i+1, 'of', len(y_test), '-- ' + str(int(100 * (i + 1) / len(y_test))) + '% complete')

        prediction.append(model.predict()[0])
        y_train_list.append(y_test_list[i])

    predictionte = y_scaler.inverse_transform(np.array(prediction).reshape(-1,1))
    y_test_ = y_scaler.inverse_transform(np.array(y_test_list).reshape(-1,1))

    # Generate error data (compare against the inverse-transformed targets)
    mse = mean_squared_error(y_test_, predictionte)
    rmse = mse ** 0.5
    mae = mean_absolute_error(y_test_, predictionte)
    return yc,predictionte.flatten().tolist(), mse, rmse, mae
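get_arima_exog evaluates the ARIMA model walk-forward: forecast one step, append the observed test value to the history, refit, repeat. The same expanding-window scheme with a trivial persistence forecaster standing in for the refit ARIMA (forecast_fn and the toy data are illustrative only):

```python
def walk_forward(train, test, forecast_fn):
    """Expanding-window, one-step-ahead evaluation: after every forecast
    the observed value joins the history before the next forecast."""
    history = list(train)
    preds = []
    for obs in test:
        preds.append(forecast_fn(history))  # one-step-ahead forecast
        history.append(obs)                 # expand the training window
    return preds

# persistence model: predict the last observed value
preds = walk_forward([1.0, 2.0, 3.0], [4.0, 5.0, 6.0], lambda h: h[-1])
# preds == [3.0, 4.0, 5.0]
```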
In [ ]:
def get_lstm(data,original_data, train_len, test_len,img_file,ma ,lstm_len=3):
    # prepare train and test data
    X_value = pd.DataFrame(data.iloc[:, :])
    y_value = pd.DataFrame(data.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scale_dataset = X_scaler.fit_transform(X_value)
    y_scale_dataset = y_scaler.fit_transform(y_value)
    # Get data and check shape: X has shape (samples, 3, 21); each 3 x 21 array is
    # 3 days of features, and yc holds the corresponding closing price values
    X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)
    # pdb.set_trace()
    X_train, X_test, = split_train_test(X)
    y_train, y_test, = split_train_test(y)
    # yc_train, yc_test, = split_train_test(original_data)
    index_train, index_test, = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
    det = 20  # fixed offset later subtracted from test predictions (manual bias correction)
    input_dim = X_train.shape[1]#3
    feature_size = X_train.shape[2]#24
    output_dim = y_train.shape[1]#1



    # Option 1
    # Set up & fit LSTM RNN
    # model = Sequential()
    # model.add(LSTM(256, activation='relu', kernel_initializer='he_normal', input_shape=(input_dim, feature_size)))
    # model.add(Dense(units=64,activation='relu'))
    # model.add(Dropout(0.5))
    # model.add(Dense(units=output_dim))
    # model.compile(optimizer=Adam(learning_rate = 0.001), loss='mse')

    # ## Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()


    # # # option 2
    # model = Sequential()
    # model.add(Bidirectional(LSTM(units= 128), input_shape=(input_dim, feature_size)))
    # model.add(Dense(64))
    # model.add(Dense(units=output_dim))
    # model.compile(optimizer=Adam(lr = 0.001), loss='mean_squared_error', metrics=['accuracy'])
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM7.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()

    # Option 3
    # define custom activation
    # 
    # class Double_Tanh(Activation):
    #     def __init__(self, activation, **kwargs):
    #         super(Double_Tanh, self).__init__(activation, **kwargs)
    #         self.__name__ = 'double_tanh'

    # def double_tanh(x):
    #     return (K.tanh(x) * 2)

    # get_custom_objects().update({'double_tanh':Double_Tanh(double_tanh)})
    #     # Model Generation
    # model = Sequential()
    # #check https://machinelearningmastery.com/use-weight-regularization-lstm-networks-time-series-forecasting/
    # model.add(LSTM(25, input_shape=(input_dim, feature_size), dropout=0.2, kernel_regularizer=l1_l2(0.00,0.00), bias_regularizer=l1_l2(0.00,0.00)))
    # model.add(Dense(1))
    # model.add(Activation(double_tanh))
    # model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mse', 'mae'])
    # # Common code
    # callbacks = [
    # EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    # ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    # ModelCheckpoint('LSTM7.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    # fname1 = img_file+'.png'
    # tensorflow.keras.utils.plot_model(
    #     model, to_file=fname1, show_shapes=True, show_dtype=False,
    #     show_layer_names=True, expand_nested=False, dpi=96,
    #     layer_range=None, show_layer_activations=False
    # )
    # history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # # plot loss
    # fname2 = img_file+'-'+ma
    # plt.title(img_file+'-'+ma+' Loss')
    # plt.xlabel("Epochs")
    # plt.ylabel("Loss")
    # pyplot.plot(history.history['loss'], label='train')
    # pyplot.plot(history.history['val_loss'], label='validation')
    # pyplot.legend()
    # pyplot.savefig(fname2+'.png',dpi='figure')
    # pyplot.show()

    # #Option 4
    # # Set up & fit LSTM RNN
    model = Sequential()
    model.add(LSTM(units=lstm_len, return_sequences=True, input_shape=(input_dim, feature_size)))
    model.add(LSTM(units=int(lstm_len/2)))
    model.add(Dense(1, activation='sigmoid'))  # note: sigmoid bounds the output to (0, 1) even though targets are scaled to (-1, 1)
    model.compile(loss='mean_squared_error', optimizer='adam')
    # Common code
    callbacks = [
    EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
    ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
    ModelCheckpoint('LSTM8.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    fname1 = img_file+'.png'
    tensorflow.keras.utils.plot_model(
        model, to_file=fname1, show_shapes=True, show_dtype=False,
        show_layer_names=True, expand_nested=False, dpi=96,
        layer_range=None, show_layer_activations=False
    )
    history = model.fit(X_train, y_train, epochs=500, batch_size=int( optimized_period[ma]), verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
    # plot loss
    fname2 = img_file+'-'+ma
    plt.title(img_file+'-'+ma+' Loss')
    plt.xlabel("Epochs")
    plt.ylabel("Loss")
    pyplot.plot(history.history['loss'], label='train')
    pyplot.plot(history.history['val_loss'], label='validation')
    pyplot.legend()
    pyplot.savefig(fname2+'.png',dpi='figure')
    pyplot.show()



    # Generate predictions
    predictiontr = model.predict(X_train, verbose=0)
    predictiontr = y_scaler.inverse_transform(predictiontr).tolist()
    outputtr = []
    for i in range(len(predictiontr)):
        outputtr.extend(predictiontr[i])
    predictiontr = outputtr
    # Generate error data

    ## replace with yc, X_test generated by new multistep method
    # compare predictions and targets in the same (inverse-transformed) scale
    Original_tr = y_scaler.inverse_transform(y_train).flatten().tolist()
    mse_tr = mean_squared_error(Original_tr, predictiontr)
    rmse_tr = mse_tr ** 0.5
    mae_tr = mean_absolute_error(Original_tr, pd.Series(predictiontr))


    predictionte = model.predict(X_test, verbose=0)
    predictionte = (y_scaler.inverse_transform(predictionte)-det).tolist()
    outputte = []
    for i in range(len(predictionte)):
        outputte.extend(predictionte[i])
    predictionte = outputte
    # Generate error data

    # compare predictions and targets in the same (inverse-transformed) scale
    Original_te = y_scaler.inverse_transform(y_test).flatten().tolist()
    mse_te = mean_squared_error(Original_te, predictionte)
    rmse_te = mse_te ** 0.5
    mae_te = mean_absolute_error(Original_te, pd.Series(predictionte))

    return Original_tr, predictiontr, mse_tr, rmse_tr,mae_tr,Original_te,predictionte, mse_te,rmse_te,mae_te
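get_X_y (defined earlier in the notebook) is what produces the (samples, n_steps_in, features) tensor that the shape comments above refer to. A minimal sketch of that sliding-window construction, with hypothetical names and a one-step-ahead target:

```python
import numpy as np

def make_windows(X, y, n_steps_in=3, n_steps_out=1):
    """Slide a window of n_steps_in rows over X; the target is the y
    value n_steps_out steps past the end of each window."""
    Xw, yw = [], []
    for i in range(len(X) - n_steps_in - n_steps_out + 1):
        Xw.append(X[i:i + n_steps_in])
        yw.append(y[i + n_steps_in + n_steps_out - 1])
    return np.array(Xw), np.array(yw)

X = np.arange(20).reshape(10, 2)  # 10 timesteps, 2 features
y = np.arange(10)                 # aligned target series
Xw, yw = make_windows(X, y)
# Xw.shape == (7, 3, 2); yw[0] == 3, the value right after the first window
```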
In [ ]:
if __name__ == '__main__':
    start_time = timeit.default_timer()
    simulation8 = {}
    imgfile = 'Experiment8'
    for ma in optimized_period:
                print(ma)
                print(functions[ma])
                print ( int( optimized_period[ma]))
                low_vol = df.apply(lambda c:  functions[ma](c, timeperiod = int( optimized_period[ma])))
                low_vol = low_vol.fillna(0)
                low_vol_data = df['close']
                high_vol = pd.DataFrame()
                df2 = df.copy()
                for i in df2.columns:
                  if i in low_vol.columns:
                    high_vol[i] = df2[i].subtract(low_vol[i], fill_value=0)
                high_vol_data = df['close']
                ## *****************************************************
                # Generate ARIMA and LSTM predictions
                print('\nWorking on ' + ma + ' predictions')
                try:
                  print('parameters used : ', train_len, test_len)
                  low_vol_Original, low_vol_prediction, low_vol_mse, low_vol_rmse,low_vol_mae = get_arima_exog(low_vol,low_vol_data, train_len, test_len)
                except Exception:
                    print('ARIMA error, skipping to next MA type')
                    continue
                Original_tr, predictiontr, mse_tr, rmse_tr,mae_tr,high_vol_Original, high_vol_prediction, high_vol_mse, high_vol_rmse,high_vol_mae, = get_lstm(high_vol,high_vol_data, train_len, test_len,imgfile,ma)
                final_prediction_tr = df['close'].head(train_len).values + pd.Series(predictiontr) # ignoring first 3 steps 
                mse_ftr = mean_squared_error(df['close'].head(train_len).values,final_prediction_tr.values)
                rmse_ftr = mse_ftr ** 0.5
                mape_ftr = mean_absolute_percentage_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
                mae_ftr = mean_absolute_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)

                final_prediction = pd.Series(low_vol_prediction[3:]) + pd.Series(high_vol_prediction)
                mse = mean_squared_error(df['close'].tail(test_len).values,final_prediction.values)
                rmse = mse ** 0.5
                mape = mean_absolute_percentage_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
                mae = mean_absolute_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
                # Generate prediction accuracy
                actual = df['close'].tail(test_len).values
                result_1 = []
                result_2 = []
                for i in range(1, len(final_prediction)):
                    # Compare prediction to previous close price
                    if final_prediction[i] > actual[i-1] and actual[i] > actual[i-1]:
                        result_1.append(1)
                    elif final_prediction[i] < actual[i-1] and actual[i] < actual[i-1]:
                        result_1.append(1)
                    else:
                        result_1.append(0)

                    # Compare prediction to previous prediction
                    if final_prediction[i] > final_prediction[i-1] and actual[i] > actual[i-1]:
                        result_2.append(1)
                    elif final_prediction[i] < final_prediction[i-1] and actual[i] < actual[i-1]:
                        result_2.append(1)
                    else:
                        result_2.append(0)

                accuracy_1 = np.mean(result_1)
                accuracy_2 = np.mean(result_2)

                simulation8[ma] = {'low_vol': {'original':list(low_vol_Original), 'prediction': list(low_vol_prediction) , 'mse': low_vol_mse,
                                              'rmse': low_vol_rmse, 'mae' : low_vol_mae},
                                  'high_vol': {'original':list(high_vol_Original),'prediction': list(high_vol_prediction), 'mse': high_vol_mse,
                                              'rmse': high_vol_rmse, 'mae' : high_vol_mae},
                                  'final_tr': {'original':df['close'].head(train_len).tolist(),'prediction': final_prediction_tr.values.tolist(), 'mse': mse_ftr,
                                              'rmse': rmse_ftr, 'mae' : mae_ftr},
                                  'final': {'original': df['close'].tail(test_len).tolist(), 'prediction': final_prediction.values.tolist(), 'mse': mse,
                                            'rmse': rmse, 'mae': mae },
                                  'accuracy': {'prediction vs close': accuracy_1, 'prediction vs prediction': accuracy_2}}

                # save simulation data here as checkpoint
                with open('simulation8_data.json', 'w') as fp:
                    json.dump(simulation8, fp)

                for key in simulation8.keys():
                    print('\n' + key)
                    print('Prediction vs Close:\t\t' + str(round(100*simulation8[key]['accuracy']['prediction vs close'], 2))
                          + '% Accuracy')
                    print('Prediction vs Prediction:\t' + str(round(100*simulation8[key]['accuracy']['prediction vs prediction'], 2))
                          + '% Accuracy')
                    print('MSE:\t', simulation8[key]['final']['mse'],
                          '\nRMSE:\t', simulation8[key]['final']['rmse'],
                          '\nMAE:\t', simulation8[key]['final']['mae'])
    elapsed = timeit.default_timer() - start_time
    print('Runtime: mins:',elapsed/60)
SMA
SMA([input_arrays], [timeperiod=30])

Simple Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
17

Working on SMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-15000.708, Time=9.21 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-13492.284, Time=2.24 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-15827.971, Time=8.03 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-13635.197, Time=10.27 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-14132.778, Time=3.58 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-15140.312, Time=9.97 sec
 ARIMA(1,3,0)(0,0,0)[0] intercept   : AIC=-13970.469, Time=7.12 sec

Best model:  ARIMA(1,3,0)(0,0,0)[0]          
Total fit time: 50.431 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(1, 3, 0)   Log Likelihood                7936.985
Date:                Sun, 12 Dec 2021   AIC                         -15827.971
Time:                        15:53:06   BIC                         -15720.081
Sample:                             0   HQIC                        -15786.537
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -4.786e-05      0.001     -0.066      0.947      -0.001       0.001
x2         -4.789e-05      0.001     -0.085      0.932      -0.001       0.001
x3         -4.819e-05      0.000     -0.105      0.917      -0.001       0.001
x4             1.0000      0.001   1557.248      0.000       0.999       1.001
x5         -4.579e-05      0.001     -0.071      0.943      -0.001       0.001
x6          -5.16e-05      0.000     -0.432      0.666      -0.000       0.000
x7         -4.778e-05      0.000     -0.278      0.781      -0.000       0.000
x8            -0.0012      0.000     -7.403      0.000      -0.002      -0.001
x9         -3.454e-06      0.002     -0.002      0.998      -0.003       0.003
x10           -0.0005      0.001     -0.403      0.687      -0.003       0.002
x11            0.0029      0.000     10.904      0.000       0.002       0.003
x12           -0.0003      0.000     -1.815      0.069      -0.001    2.06e-05
x13        -4.809e-05      0.000     -0.157      0.875      -0.001       0.001
x14           -0.0001      0.000     -0.482      0.630      -0.001       0.000
x15        -5.214e-05      0.000     -0.273      0.785      -0.000       0.000
x16        -4.468e-05      0.000     -0.125      0.901      -0.001       0.001
x17        -4.224e-05      0.000     -0.202      0.840      -0.000       0.000
x18        -8.086e-05      0.000     -0.270      0.787      -0.001       0.001
x19        -5.537e-05      0.000     -0.244      0.807      -0.000       0.000
x20         8.423e-05      0.000      0.333      0.739      -0.000       0.001
x21        -4.232e-05      0.000     -0.166      0.868      -0.001       0.000
ar.L1         -0.6666   6.03e-06  -1.11e+05      0.000      -0.667      -0.667
sigma2      4.093e-10   8.97e-11      4.563      0.000    2.33e-10    5.85e-10
===================================================================================
Ljung-Box (L1) (Q):                  60.24   Jarque-Bera (JB):           1334882.31
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.11   Skew:                            -3.81
Prob(H) (two-sided):                  0.00   Kurtosis:                       202.35
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 5.73e+20. Standard errors may be unstable.
ARIMA order: (1, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.04321, saving model to LSTM8.h5
48/48 - 3s - loss: 1.3590 - val_loss: 0.0432 - lr: 0.0010 - 3s/epoch - 69ms/step
Epoch 2/500
Epoch 00002: val_loss did not improve from 0.04321
48/48 - 0s - loss: 1.2809 - val_loss: 0.0457 - lr: 0.0010 - 215ms/epoch - 4ms/step
[Epochs 3-50 omitted: val_loss never improved on 0.04321; ReduceLROnPlateau cut the learning rate to 1e-04 at epoch 6 and to 1e-05 at epoch 11, then held it at the 1e-05 floor from epoch 16.]
Epoch 51/500
Epoch 00051: val_loss did not improve from 0.04321
48/48 - 0s - loss: 0.9075 - val_loss: 0.0691 - lr: 1.0000e-05 - 217ms/epoch - 5ms/step
Epoch 00051: early stopping
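The schedule visible in this log (learning rate cut from 1e-3 to 1e-4 at epoch 6, to 1e-5 at epoch 11, then held at a 1e-5 floor, with training stopped at epoch 51 while the best checkpoint stayed at epoch 1) is consistent with a `ReduceLROnPlateau(factor=0.1, patience=5, min_lr=1e-5)` plus `EarlyStopping(patience=50)` callback pair. Those exact settings are inferred from the output, not stated in the notebook; a minimal pure-Python sketch of the assumed plateau logic:

```python
class PlateauScheduler:
    """Minimal sketch of Keras-style ReduceLROnPlateau semantics.
    factor=0.1, patience=5, min_lr=1e-5 are assumptions inferred from the log."""

    def __init__(self, lr=1e-3, factor=0.1, patience=5, min_lr=1e-5):
        self.lr, self.factor, self.patience, self.min_lr = lr, factor, patience, min_lr
        self.best = float("inf")
        self.wait = 0

    def on_epoch_end(self, val_loss):
        if val_loss < self.best:            # an improvement resets the counter
            self.best, self.wait = val_loss, 0
        else:
            self.wait += 1
            if self.wait >= self.patience:  # plateau: shrink lr, restart the wait
                self.lr = max(self.lr * self.factor, self.min_lr)
                self.wait = 0
        return self.lr

sched = PlateauScheduler()
# val_loss improves once at epoch 1, then worsens monotonically, as in the log
losses = [0.0432] + [0.0432 + 0.001 * i for i in range(1, 51)]
lrs = [sched.on_epoch_end(v) for v in losses]
```

Fed a loss curve shaped like the one above, this reproduces the reductions the log reports at epochs 6, 11, and 16 (the last one clipped at `min_lr`).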
SMA
Prediction vs Close:		54.85% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 27.593274833467863 
RMSE:	 5.252930118844897 
MAPE:	 4.117405060238624
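The SMA block above reports a directional-accuracy pair plus MSE, RMSE, and MAPE. The notebook's evaluation code is not shown in this output; a minimal sketch of these metrics as commonly defined, where `y_true`/`y_pred` are hypothetical stand-ins for the close prices and hybrid predictions, and the directional reading of "Prediction vs Close" is an assumption:

```python
import math

def directional_accuracy(y_true, y_pred):
    # Share of steps where the predicted move and the actual move share a sign
    # (one plausible reading of the "Prediction vs Close" accuracy above).
    hits = sum(
        (y_pred[i] - y_pred[i - 1]) * (y_true[i] - y_true[i - 1]) > 0
        for i in range(1, len(y_true))
    )
    return 100.0 * hits / (len(y_true) - 1)

def mse(y_true, y_pred):
    # Mean squared error
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    # Root mean squared error
    return math.sqrt(mse(y_true, y_pred))

def mape(y_true, y_pred):
    # Mean absolute percentage error, in percent
    return 100.0 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical values, not the notebook's actual series
y_true = [100.0, 102.0, 101.0, 105.0]
y_pred = [101.0, 101.0, 103.0, 104.0]
```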
EMA
EMA([input_arrays], [timeperiod=30])

Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
51
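The help text above is TA-Lib's `EMA` signature. For reference, a pure-Python sketch of the exponential moving average, assuming TA-Lib's documented seeding convention (the first output is the SMA of the first `timeperiod` values; earlier slots are left empty, mirroring TA-Lib's leading NaNs):

```python
def ema(prices, timeperiod=30):
    """Exponential moving average, SMA-seeded (assumed TA-Lib convention).
    Positions before the first full window are None, like TA-Lib's NaNs."""
    if len(prices) < timeperiod:
        return [None] * len(prices)
    k = 2.0 / (timeperiod + 1)            # smoothing multiplier
    out = [None] * (timeperiod - 1)
    prev = sum(prices[:timeperiod]) / timeperiod
    out.append(prev)
    for p in prices[timeperiod:]:
        prev = p * k + prev * (1 - k)     # EMA_t = k*price_t + (1-k)*EMA_{t-1}
        out.append(prev)
    return out
```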

Working on EMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-17005.775, Time=2.28 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14574.593, Time=3.86 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16801.081, Time=8.74 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14572.593, Time=5.08 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-14532.068, Time=7.00 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-15610.472, Time=11.60 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-16103.302, Time=12.94 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-17030.021, Time=4.12 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=-17004.614, Time=2.92 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=-17087.441, Time=5.84 sec
 ARIMA(3,3,2)(0,0,0)[0]             : AIC=inf, Time=17.14 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal
  return np.roots(self.polynomial_reduced_ma)**-1
 ARIMA(2,3,2)(0,0,0)[0]             : AIC=-17003.984, Time=3.03 sec
 ARIMA(3,3,1)(0,0,0)[0] intercept   : AIC=-16998.666, Time=3.23 sec

Best model:  ARIMA(3,3,1)(0,0,0)[0]          
Total fit time: 87.797 seconds
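The stepwise search picks the order minimizing the Akaike information criterion, AIC = 2k − 2·ln L. As a sanity check, the winning ARIMA(3,3,1) value can be recovered from the reported log-likelihood, assuming k counts the 21 exogenous coefficients (x1..x21) plus ar.L1–L3, ma.L1, and sigma2; that parameter count is inferred from the summary table, not stated directly:

```python
def aic(log_likelihood, n_params):
    # Akaike information criterion: 2k - 2*ln(L)
    return 2 * n_params - 2 * log_likelihood

# Reported figures for the ARIMA(3,3,1) fit: logL = 8569.720, k = 26
# (21 exogenous + 3 AR + 1 MA + sigma2); small mismatch vs the printed
# AIC of -17087.441 comes from logL being rounded to three decimals.
value = aic(8569.720, 26)
```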
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 1)   Log Likelihood                8569.720
Date:                Sun, 12 Dec 2021   AIC                         -17087.441
Time:                        15:55:29   BIC                         -16965.479
Sample:                             0   HQIC                        -17040.602
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -2.316e-10   9.87e-21  -2.35e+10      0.000   -2.32e-10   -2.32e-10
x2         -2.308e-10   9.85e-21  -2.34e+10      0.000   -2.31e-10   -2.31e-10
x3         -2.324e-10   9.88e-21  -2.35e+10      0.000   -2.32e-10   -2.32e-10
x4             1.0000   9.87e-21   1.01e+20      0.000       1.000       1.000
x5         -2.106e-10   9.41e-21  -2.24e+10      0.000   -2.11e-10   -2.11e-10
x6         -7.996e-10   1.74e-20  -4.59e+10      0.000      -8e-10      -8e-10
x7         -2.295e-10   9.82e-21  -2.34e+10      0.000   -2.29e-10   -2.29e-10
x8         -2.244e-10   9.71e-21  -2.31e+10      0.000   -2.24e-10   -2.24e-10
x9         -1.166e-11   1.98e-21   -5.9e+09      0.000   -1.17e-11   -1.17e-11
x10        -4.453e-11   4.22e-21  -1.06e+10      0.000   -4.45e-11   -4.45e-11
x11        -2.219e-10   9.65e-21   -2.3e+10      0.000   -2.22e-10   -2.22e-10
x12        -2.264e-10   9.76e-21  -2.32e+10      0.000   -2.26e-10   -2.26e-10
x13        -2.315e-10   9.87e-21  -2.35e+10      0.000   -2.31e-10   -2.31e-10
x14        -1.766e-09   2.73e-20  -6.48e+10      0.000   -1.77e-09   -1.77e-09
x15        -2.167e-10   9.37e-21  -2.31e+10      0.000   -2.17e-10   -2.17e-10
x16        -5.232e-10   1.49e-20  -3.52e+10      0.000   -5.23e-10   -5.23e-10
x17        -2.147e-10   9.48e-21  -2.27e+10      0.000   -2.15e-10   -2.15e-10
x18        -3.791e-11   3.96e-21  -9.56e+09      0.000   -3.79e-11   -3.79e-11
x19        -2.597e-10   1.05e-20  -2.48e+10      0.000    -2.6e-10    -2.6e-10
x20        -2.417e-10      1e-20  -2.41e+10      0.000   -2.42e-10   -2.42e-10
x21        -4.823e-10    1.4e-20  -3.44e+10      0.000   -4.82e-10   -4.82e-10
ar.L1         -0.4923   1.46e-22  -3.38e+21      0.000      -0.492      -0.492
ar.L2         -0.1923   8.47e-23  -2.27e+21      0.000      -0.192      -0.192
ar.L3         -0.0462   4.02e-23  -1.15e+21      0.000      -0.046      -0.046
ma.L1         -0.7077   3.31e-22  -2.14e+21      0.000      -0.708      -0.708
sigma2       8.99e-11   6.95e-11      1.293      0.196   -4.64e-11    2.26e-10
===================================================================================
Ljung-Box (L1) (Q):                  54.09   Jarque-Bera (JB):           4207353.17
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             5.48
Prob(H) (two-sided):                  0.00   Kurtosis:                       357.00
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 3.15e+40. Standard errors may be unstable.
ARIMA order: (3, 3, 1) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.05220, saving model to LSTM8.h5
16/16 - 4s - loss: 1.3970 - val_loss: 0.0522 - lr: 0.0010 - 4s/epoch - 233ms/step
Epoch 2/500
Epoch 00002: val_loss did not improve from 0.05220
16/16 - 0s - loss: 1.3477 - val_loss: 0.0530 - lr: 0.0010 - 86ms/epoch - 5ms/step
[Epochs 3-50 omitted: val_loss never improved on 0.05220; ReduceLROnPlateau cut the learning rate to 1e-04 at epoch 6 and to 1e-05 at epoch 11, then held it at the 1e-05 floor from epoch 16.]
Epoch 51/500
Epoch 00051: val_loss did not improve from 0.05220
16/16 - 0s - loss: 1.1262 - val_loss: 0.0596 - lr: 1.0000e-05 - 91ms/epoch - 6ms/step
Epoch 00051: early stopping
EMA
Prediction vs Close:		54.85% Accuracy
Prediction vs Prediction:	49.63% Accuracy
MSE:	 36.00024912834101 
RMSE:	 6.000020760659167 
MAPE:	 4.70426166066095
WMA
WMA([input_arrays], [timeperiod=30])

Weighted Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
49
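The help text above is TA-Lib's `WMA` signature. A pure-Python sketch of the linearly weighted moving average as TA-Lib defines it: the most recent price in the window gets weight `timeperiod`, the oldest gets weight 1, and the sum is normalized by the triangular number of the weights:

```python
def wma(prices, timeperiod=30):
    """Linearly weighted moving average (TA-Lib's WMA definition).
    Positions before the first full window are None, like TA-Lib's NaNs."""
    denom = timeperiod * (timeperiod + 1) / 2   # 1 + 2 + ... + timeperiod
    out = [None] * (timeperiod - 1)
    for i in range(timeperiod - 1, len(prices)):
        window = prices[i - timeperiod + 1 : i + 1]
        out.append(sum(w * p for w, p in enumerate(window, start=1)) / denom)
    return out
```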

Working on WMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-14480.432, Time=9.06 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-15747.905, Time=6.31 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-15116.389, Time=6.99 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-13532.115, Time=7.99 sec
 ARIMA(0,3,0)(0,0,0)[0] intercept   : AIC=-13619.624, Time=5.40 sec

Best model:  ARIMA(0,3,0)(0,0,0)[0]          
Total fit time: 35.768 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(0, 3, 0)   Log Likelihood                7895.952
Date:                Sun, 12 Dec 2021   AIC                         -15747.905
Time:                        16:03:44   BIC                         -15644.706
Sample:                             0   HQIC                        -15708.272
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1          3.384e-05    1.9e-05      1.778      0.075   -3.47e-06    7.12e-05
x2          3.379e-05   1.84e-05      1.832      0.067   -2.35e-06    6.99e-05
x3          3.388e-05   4.34e-05      0.781      0.435   -5.12e-05       0.000
x4             1.0000   4.12e-06   2.43e+05      0.000       1.000       1.000
x5          3.227e-05   3.52e-06      9.163      0.000    2.54e-05    3.92e-05
x6          5.559e-05   6.75e-05      0.823      0.410   -7.67e-05       0.000
x7          3.369e-05   2.38e-05      1.415      0.157    -1.3e-05    8.03e-05
x8             0.0023    2.6e-05     86.661      0.000       0.002       0.002
x9          -8.72e-06   7.51e-07    -11.610      0.000   -1.02e-05   -7.25e-06
x10           -0.0023   3.33e-05    -67.770      0.000      -0.002      -0.002
x11            0.0093    2.8e-05    333.459      0.000       0.009       0.009
x12           -0.0118   2.37e-05   -498.171      0.000      -0.012      -0.012
x13         3.382e-05   1.49e-05      2.273      0.023    4.66e-06     6.3e-05
x14         9.271e-05   6.21e-05      1.493      0.135    -2.9e-05       0.000
x15         3.096e-05   1.92e-05      1.614      0.106   -6.63e-06    6.86e-05
x16          5.52e-05   7.17e-05      0.770      0.441   -8.53e-05       0.000
x17          3.38e-05    3.2e-05      1.056      0.291   -2.89e-05    9.65e-05
x18        -6.715e-06   8.34e-05     -0.081      0.936      -0.000       0.000
x19         3.428e-05   2.07e-05      1.654      0.098   -6.34e-06    7.49e-05
x20        -8.089e-06   9.55e-05     -0.085      0.933      -0.000       0.000
x21         4.255e-05      0.000      0.094      0.925      -0.001       0.001
sigma2      2.581e-10   7.87e-11      3.280      0.001    1.04e-10    4.12e-10
===================================================================================
Ljung-Box (L1) (Q):                 362.92   Jarque-Bera (JB):           5047564.68
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.01   Skew:                           -11.23
Prob(H) (two-sided):                  0.00   Kurtosis:                       390.28
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.75e+20. Standard errors may be unstable.
ARIMA order: (0, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.05515, saving model to LSTM8.h5
17/17 - 4s - loss: 1.3965 - val_loss: 0.0551 - lr: 0.0010 - 4s/epoch - 221ms/step
Epoch 2/500
Epoch 00002: val_loss did not improve from 0.05515
17/17 - 0s - loss: 1.3184 - val_loss: 0.0556 - lr: 0.0010 - 90ms/epoch - 5ms/step
[Epochs 3-50 omitted: val_loss never improved on 0.05515; ReduceLROnPlateau cut the learning rate to 1e-04 at epoch 6 and to 1e-05 at epoch 11, then held it at the 1e-05 floor from epoch 16.]
Epoch 51/500
Epoch 00051: val_loss did not improve from 0.05515
17/17 - 0s - loss: 1.0327 - val_loss: 0.0613 - lr: 1.0000e-05 - 97ms/epoch - 6ms/step
Epoch 00051: early stopping
WMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 41.43257052147296 
RMSE:	 6.4368136932393005 
MAPE:	 5.073793611210466
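The summaries above report a directional-accuracy score alongside MSE, RMSE, and MAPE for each moving average. The notebook's own evaluation code is not shown in this output, but the metrics can be sketched in plain Python; the function names below are illustrative, and `directional_accuracy` is one plausible reading of the "Prediction vs Close" score (share of days where the predicted move and the actual move have the same sign):

```python
import math

def mse(actual, predicted):
    # Mean squared error over paired observations
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    # Root mean squared error: square root of MSE
    return math.sqrt(mse(actual, predicted))

def mape(actual, predicted):
    # Mean absolute percentage error, in percent (assumes actual != 0)
    return 100 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

def directional_accuracy(actual, predicted):
    # Share of days where the predicted move and the actual move point
    # in the same direction relative to the previous close
    hits = sum(
        1
        for i in range(1, len(actual))
        if (predicted[i] - actual[i - 1]) * (actual[i] - actual[i - 1]) > 0
    )
    return 100 * hits / (len(actual) - 1)

actual = [100.0, 102.0, 101.0, 103.0]
predicted = [100.5, 101.5, 101.5, 102.5]
print(mse(actual, predicted), rmse(actual, predicted))  # -> 0.25 0.5
```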
DEMA
DEMA([input_arrays], [timeperiod=30])

Double Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
89
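The TA-Lib help text above only gives the DEMA signature. The indicator itself is defined as `2*EMA(price, n) - EMA(EMA(price, n), n)`, which cancels much of a plain EMA's lag. A minimal pure-Python sketch (the notebook itself uses TA-Lib's `DEMA`; these helper functions are illustrative):

```python
def ema(prices, period):
    # Standard exponential moving average with smoothing 2/(period+1),
    # seeded with the first price
    alpha = 2 / (period + 1)
    out = [prices[0]]
    for p in prices[1:]:
        out.append(alpha * p + (1 - alpha) * out[-1])
    return out

def dema(prices, period=30):
    # Double EMA: 2*EMA - EMA(EMA), which reduces the lag of a plain EMA
    e1 = ema(prices, period)
    e2 = ema(e1, period)
    return [2 * a - b for a, b in zip(e1, e2)]

# A constant series stays at its level (up to float rounding);
# on a rising series DEMA sits above the plain EMA, i.e. it lags less
flat = [50.0] * 40
print(dema(flat, 30)[-1])
```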

Working on DEMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-17005.774, Time=2.25 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14574.593, Time=3.92 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-15590.302, Time=6.94 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14572.593, Time=5.27 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-15269.503, Time=6.70 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-16414.961, Time=8.36 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-16878.396, Time=9.51 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-17030.019, Time=4.41 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=-17004.613, Time=3.00 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=-17087.441, Time=6.15 sec
 ARIMA(3,3,2)(0,0,0)[0]             : AIC=inf, Time=14.61 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal
  return np.roots(self.polynomial_reduced_ma)**-1
 ARIMA(2,3,2)(0,0,0)[0]             : AIC=-17003.985, Time=3.09 sec
 ARIMA(3,3,1)(0,0,0)[0] intercept   : AIC=-16998.665, Time=3.57 sec

Best model:  ARIMA(3,3,1)(0,0,0)[0]          
Total fit time: 77.793 seconds
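The stepwise search above ranks candidate orders by AIC = 2k − 2 ln L, where k is the number of estimated parameters. As a sanity check (not part of the notebook), the winning model's AIC can be reproduced from the log likelihood in the SARIMAX summary that follows, counting 21 exogenous coefficients, 3 AR terms, 1 MA term, and sigma²:

```python
log_likelihood = 8569.721   # from the SARIMAX summary
k = 21 + 3 + 1 + 1          # exogenous betas + AR terms + MA term + sigma^2

# AIC = 2k - 2*ln(L); matches the reported -17087.441 up to rounding
aic = 2 * k - 2 * log_likelihood
print(aic)
```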
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 1)   Log Likelihood                8569.721
Date:                Sun, 12 Dec 2021   AIC                         -17087.441
Time:                        16:05:33   BIC                         -16965.479
Sample:                             0   HQIC                        -17040.603
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1         -2.799e-10   1.43e-20  -1.96e+10      0.000    -2.8e-10    -2.8e-10
x2         -2.817e-10   1.43e-20  -1.97e+10      0.000   -2.82e-10   -2.82e-10
x3         -2.805e-10   1.43e-20  -1.96e+10      0.000    -2.8e-10    -2.8e-10
x4             1.0000   1.43e-20      7e+19      0.000       1.000       1.000
x5         -2.597e-10   1.37e-20  -1.89e+10      0.000    -2.6e-10    -2.6e-10
x6         -1.388e-09   3.12e-20  -4.45e+10      0.000   -1.39e-09   -1.39e-09
x7         -2.789e-10   1.42e-20  -1.96e+10      0.000   -2.79e-10   -2.79e-10
x8          -2.76e-10   1.42e-20  -1.95e+10      0.000   -2.76e-10   -2.76e-10
x9         -2.216e-12   3.53e-22  -6.28e+09      0.000   -2.22e-12   -2.22e-12
x10        -1.345e-10   9.82e-21  -1.37e+10      0.000   -1.34e-10   -1.34e-10
x11        -2.898e-10   1.45e-20     -2e+10      0.000    -2.9e-10    -2.9e-10
x12        -2.602e-10   1.38e-20  -1.89e+10      0.000    -2.6e-10    -2.6e-10
x13        -2.807e-10   1.43e-20  -1.96e+10      0.000   -2.81e-10   -2.81e-10
x14         -1.87e-09   3.69e-20  -5.07e+10      0.000   -1.87e-09   -1.87e-09
x15        -2.726e-10   1.43e-20   -1.9e+10      0.000   -2.73e-10   -2.73e-10
x16        -7.915e-11   7.68e-21  -1.03e+10      0.000   -7.92e-11   -7.92e-11
x17        -2.606e-10   1.33e-20  -1.96e+10      0.000   -2.61e-10   -2.61e-10
x18        -6.408e-10   2.16e-20  -2.97e+10      0.000   -6.41e-10   -6.41e-10
x19        -2.881e-10   1.46e-20  -1.98e+10      0.000   -2.88e-10   -2.88e-10
x20        -4.337e-10   1.78e-20  -2.44e+10      0.000   -4.34e-10   -4.34e-10
x21        -4.549e-10   1.79e-20  -2.55e+10      0.000   -4.55e-10   -4.55e-10
ar.L1         -0.4923   1.46e-22  -3.38e+21      0.000      -0.492      -0.492
ar.L2         -0.1923   8.47e-23  -2.27e+21      0.000      -0.192      -0.192
ar.L3         -0.0461   4.02e-23  -1.15e+21      0.000      -0.046      -0.046
ma.L1         -0.7078   3.31e-22  -2.14e+21      0.000      -0.708      -0.708
sigma2       8.99e-11   6.95e-11      1.293      0.196   -4.64e-11    2.26e-10
===================================================================================
Ljung-Box (L1) (Q):                  55.07   Jarque-Bera (JB):           4171695.82
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             5.26
Prob(H) (two-sided):                  0.00   Kurtosis:                       355.51
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 2.62e+41. Standard errors may be unstable.
ARIMA order: (3, 3, 1) 
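The training log that follows shows three Keras callbacks at work: `ModelCheckpoint` (saving the best model to LSTM8.h5), `ReduceLROnPlateau` (the learning rate stepping 1e-3 → 1e-4 → 1e-5 and then holding at the 1e-5 floor), and `EarlyStopping`. A pure-Python simulation of the `ReduceLROnPlateau` schedule; the factor, patience, and floor below are inferred from the log, so treat them as assumptions rather than the notebook's actual settings:

```python
def simulate_reduce_lr(val_losses, lr=1e-3, factor=0.1, patience=4, min_lr=1e-5):
    # Mimics keras.callbacks.ReduceLROnPlateau: after `patience` epochs
    # without a new best val_loss, multiply lr by `factor`, never going
    # below `min_lr`; returns the lr in effect at each epoch.
    best = float("inf")
    wait = 0
    history = []
    for loss in val_losses:
        if loss < best:
            best = loss
            wait = 0
        else:
            wait += 1
            if wait >= patience:
                lr = max(lr * factor, min_lr)
                wait = 0
        history.append(lr)
    return history

# Improvement for 4 epochs, then a long plateau: lr steps down and
# finally pins at the min_lr floor, as in the log above
losses = [0.040, 0.039, 0.038, 0.036] + [0.037] * 12
print(simulate_reduce_lr(losses)[-1])
```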

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.03617, saving model to LSTM8.h5
10/10 - 3s - loss: 1.4293 - val_loss: 0.0362 - lr: 0.0010 - 3s/epoch - 333ms/step
Epoch 2/500

Epoch 00002: val_loss improved from 0.03617 to 0.03594, saving model to LSTM8.h5
10/10 - 0s - loss: 1.3853 - val_loss: 0.0359 - lr: 0.0010 - 81ms/epoch - 8ms/step
Epoch 3/500

Epoch 00003: val_loss improved from 0.03594 to 0.03580, saving model to LSTM8.h5
10/10 - 0s - loss: 1.3458 - val_loss: 0.0358 - lr: 0.0010 - 73ms/epoch - 7ms/step
Epoch 4/500

Epoch 00004: val_loss improved from 0.03580 to 0.03575, saving model to LSTM8.h5
10/10 - 0s - loss: 1.3096 - val_loss: 0.0357 - lr: 0.0010 - 76ms/epoch - 8ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.03575
10/10 - 0s - loss: 1.2761 - val_loss: 0.0358 - lr: 0.0010 - 61ms/epoch - 6ms/step
Epoch 6/500

Epoch 00006: val_loss did not improve from 0.03575
10/10 - 0s - loss: 1.2446 - val_loss: 0.0359 - lr: 0.0010 - 57ms/epoch - 6ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.03575
10/10 - 0s - loss: 1.2145 - val_loss: 0.0361 - lr: 0.0010 - 60ms/epoch - 6ms/step
Epoch 8/500

Epoch 00008: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00008: val_loss did not improve from 0.03575
10/10 - 0s - loss: 1.1853 - val_loss: 0.0364 - lr: 0.0010 - 58ms/epoch - 6ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.03575
10/10 - 0s - loss: 1.1653 - val_loss: 0.0364 - lr: 1.0000e-04 - 63ms/epoch - 6ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.03575
10/10 - 0s - loss: 1.1625 - val_loss: 0.0364 - lr: 1.0000e-04 - 64ms/epoch - 6ms/step
Epoch 11/500

Epoch 00011: val_loss did not improve from 0.03575
10/10 - 0s - loss: 1.1596 - val_loss: 0.0365 - lr: 1.0000e-04 - 70ms/epoch - 7ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.03575
10/10 - 0s - loss: 1.1568 - val_loss: 0.0365 - lr: 1.0000e-04 - 66ms/epoch - 7ms/step
Epoch 13/500

Epoch 00013: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00013: val_loss did not improve from 0.03575
10/10 - 0s - loss: 1.1540 - val_loss: 0.0365 - lr: 1.0000e-04 - 66ms/epoch - 7ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.03575
10/10 - 0s - loss: 1.1520 - val_loss: 0.0365 - lr: 1.0000e-05 - 66ms/epoch - 7ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.03575
10/10 - 0s - loss: 1.1518 - val_loss: 0.0365 - lr: 1.0000e-05 - 66ms/epoch - 7ms/step
Epoch 16/500

Epoch 00016: val_loss did not improve from 0.03575
10/10 - 0s - loss: 1.1515 - val_loss: 0.0365 - lr: 1.0000e-05 - 59ms/epoch - 6ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.03575
10/10 - 0s - loss: 1.1512 - val_loss: 0.0366 - lr: 1.0000e-05 - 59ms/epoch - 6ms/step
Epoch 18/500

Epoch 00018: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00018: val_loss did not improve from 0.03575
10/10 - 0s - loss: 1.1509 - val_loss: 0.0366 - lr: 1.0000e-05 - 70ms/epoch - 7ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.03575
10/10 - 0s - loss: 1.1506 - val_loss: 0.0366 - lr: 1.0000e-05 - 61ms/epoch - 6ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.03575
10/10 - 0s - loss: 1.1503 - val_loss: 0.0366 - lr: 1.0000e-05 - 67ms/epoch - 7ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.03575
10/10 - 0s - loss: 1.1500 - val_loss: 0.0366 - lr: 1.0000e-05 - 58ms/epoch - 6ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.03575
10/10 - 0s - loss: 1.1498 - val_loss: 0.0366 - lr: 1.0000e-05 - 59ms/epoch - 6ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.03575
10/10 - 0s - loss: 1.1495 - val_loss: 0.0366 - lr: 1.0000e-05 - 62ms/epoch - 6ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.03575
10/10 - 0s - loss: 1.1492 - val_loss: 0.0366 - lr: 1.0000e-05 - 60ms/epoch - 6ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.03575
10/10 - 0s - loss: 1.1489 - val_loss: 0.0366 - lr: 1.0000e-05 - 66ms/epoch - 7ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.03575
10/10 - 0s - loss: 1.1486 - val_loss: 0.0366 - lr: 1.0000e-05 - 72ms/epoch - 7ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.03575
10/10 - 0s - loss: 1.1483 - val_loss: 0.0366 - lr: 1.0000e-05 - 68ms/epoch - 7ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.03575
10/10 - 0s - loss: 1.1480 - val_loss: 0.0366 - lr: 1.0000e-05 - 64ms/epoch - 6ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.03575
10/10 - 0s - loss: 1.1477 - val_loss: 0.0366 - lr: 1.0000e-05 - 63ms/epoch - 6ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.03575
10/10 - 0s - loss: 1.1474 - val_loss: 0.0366 - lr: 1.0000e-05 - 60ms/epoch - 6ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.03575
10/10 - 0s - loss: 1.1471 - val_loss: 0.0366 - lr: 1.0000e-05 - 68ms/epoch - 7ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.03575
10/10 - 0s - loss: 1.1468 - val_loss: 0.0366 - lr: 1.0000e-05 - 60ms/epoch - 6ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.03575
10/10 - 0s - loss: 1.1466 - val_loss: 0.0366 - lr: 1.0000e-05 - 61ms/epoch - 6ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.03575
10/10 - 0s - loss: 1.1463 - val_loss: 0.0366 - lr: 1.0000e-05 - 59ms/epoch - 6ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.03575
10/10 - 0s - loss: 1.1460 - val_loss: 0.0366 - lr: 1.0000e-05 - 60ms/epoch - 6ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.03575
10/10 - 0s - loss: 1.1457 - val_loss: 0.0366 - lr: 1.0000e-05 - 67ms/epoch - 7ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.03575
10/10 - 0s - loss: 1.1454 - val_loss: 0.0366 - lr: 1.0000e-05 - 60ms/epoch - 6ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.03575
10/10 - 0s - loss: 1.1451 - val_loss: 0.0366 - lr: 1.0000e-05 - 63ms/epoch - 6ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.03575
10/10 - 0s - loss: 1.1448 - val_loss: 0.0366 - lr: 1.0000e-05 - 63ms/epoch - 6ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.03575
10/10 - 0s - loss: 1.1445 - val_loss: 0.0366 - lr: 1.0000e-05 - 69ms/epoch - 7ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.03575
10/10 - 0s - loss: 1.1442 - val_loss: 0.0367 - lr: 1.0000e-05 - 68ms/epoch - 7ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.03575
10/10 - 0s - loss: 1.1439 - val_loss: 0.0367 - lr: 1.0000e-05 - 65ms/epoch - 6ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.03575
10/10 - 0s - loss: 1.1436 - val_loss: 0.0367 - lr: 1.0000e-05 - 63ms/epoch - 6ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.03575
10/10 - 0s - loss: 1.1433 - val_loss: 0.0367 - lr: 1.0000e-05 - 74ms/epoch - 7ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.03575
10/10 - 0s - loss: 1.1430 - val_loss: 0.0367 - lr: 1.0000e-05 - 67ms/epoch - 7ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.03575
10/10 - 0s - loss: 1.1427 - val_loss: 0.0367 - lr: 1.0000e-05 - 64ms/epoch - 6ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.03575
10/10 - 0s - loss: 1.1424 - val_loss: 0.0367 - lr: 1.0000e-05 - 68ms/epoch - 7ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.03575
10/10 - 0s - loss: 1.1421 - val_loss: 0.0367 - lr: 1.0000e-05 - 68ms/epoch - 7ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.03575
10/10 - 0s - loss: 1.1418 - val_loss: 0.0367 - lr: 1.0000e-05 - 63ms/epoch - 6ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.03575
10/10 - 0s - loss: 1.1416 - val_loss: 0.0367 - lr: 1.0000e-05 - 58ms/epoch - 6ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.03575
10/10 - 0s - loss: 1.1413 - val_loss: 0.0367 - lr: 1.0000e-05 - 61ms/epoch - 6ms/step
Epoch 52/500

Epoch 00052: val_loss did not improve from 0.03575
10/10 - 0s - loss: 1.1410 - val_loss: 0.0367 - lr: 1.0000e-05 - 62ms/epoch - 6ms/step
Epoch 53/500

Epoch 00053: val_loss did not improve from 0.03575
10/10 - 0s - loss: 1.1407 - val_loss: 0.0367 - lr: 1.0000e-05 - 59ms/epoch - 6ms/step
Epoch 54/500

Epoch 00054: val_loss did not improve from 0.03575
10/10 - 0s - loss: 1.1404 - val_loss: 0.0367 - lr: 1.0000e-05 - 61ms/epoch - 6ms/step
Epoch 00054: early stopping
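Here training halts at epoch 54 even though the 500-epoch budget is far from exhausted: val_loss last improved at epoch 4, which is consistent with an `EarlyStopping` patience of 50. A small simulation of that logic (the patience value is inferred from the log, not read from the notebook's code):

```python
def early_stop_epoch(val_losses, patience=50):
    # Mimics keras.callbacks.EarlyStopping: stop once val_loss has not
    # improved for `patience` consecutive epochs; returns the 1-based
    # epoch at which training halts (or the last epoch if it never does).
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best = loss
            wait = 0
        else:
            wait += 1
            if wait >= patience:
                return epoch
    return len(val_losses)

# Best loss at epoch 4, then a flat plateau: halts at epoch 4 + 50 = 54
losses = [0.040, 0.038, 0.037, 0.0357] + [0.036] * 100
print(early_stop_epoch(losses))  # -> 54
```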
SMA
Prediction vs Close:		54.85% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 27.593274833467863 
RMSE:	 5.252930118844897 
MAPE:	 4.117405060238624

EMA
Prediction vs Close:		54.85% Accuracy
Prediction vs Prediction:	49.63% Accuracy
MSE:	 36.00024912834101 
RMSE:	 6.000020760659167 
MAPE:	 4.70426166066095

WMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 41.43257052147296 
RMSE:	 6.4368136932393005 
MAPE:	 5.073793611210466

DEMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	45.15% Accuracy
MSE:	 181.24518187853474 
RMSE:	 13.462733076108087 
MAPE:	 12.35792879577932
KAMA
KAMA([input_arrays], [timeperiod=30])

Kaufman Adaptive Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
18
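KAMA is the one adaptive average in this set: its smoothing constant scales with the efficiency ratio ER = |net change over the period| / sum of absolute bar-to-bar changes, so it speeds up in trends and flattens out in noise. A pure-Python sketch of the standard formulation (the notebook uses TA-Lib's `KAMA`; this helper and its seeding are illustrative):

```python
def kama(prices, period=10, fast=2, slow=30):
    # Kaufman Adaptive Moving Average. The smoothing constant SC moves
    # between the "fast" and "slow" EMA constants according to ER.
    fastest = 2 / (fast + 1)
    slowest = 2 / (slow + 1)
    out = [prices[period]]        # seed at the first computable bar
    for t in range(period + 1, len(prices)):
        change = abs(prices[t] - prices[t - period])
        volatility = sum(
            abs(prices[i] - prices[i - 1]) for i in range(t - period + 1, t + 1)
        )
        er = change / volatility if volatility else 0.0
        sc = (er * (fastest - slowest) + slowest) ** 2
        out.append(out[-1] + sc * (prices[t] - out[-1]))
    return out

# On a clean linear trend ER = 1, so KAMA rises steadily while lagging price
trend = [float(i) for i in range(1, 41)]
smoothed = kama(trend)
```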

Working on KAMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-17005.902, Time=2.11 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14574.593, Time=3.85 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16796.316, Time=7.98 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14572.593, Time=5.53 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-17004.193, Time=2.45 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-15176.063, Time=9.99 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-16873.638, Time=9.63 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-17008.756, Time=2.46 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=-17004.764, Time=3.05 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=-15723.849, Time=14.32 sec
 ARIMA(2,3,0)(0,0,0)[0] intercept   : AIC=-17006.756, Time=2.85 sec

Best model:  ARIMA(2,3,0)(0,0,0)[0]          
Total fit time: 64.230 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(2, 3, 0)   Log Likelihood                8528.378
Date:                Sun, 12 Dec 2021   AIC                         -17008.756
Time:                        16:14:14   BIC                         -16896.176
Sample:                             0   HQIC                        -16965.520
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1          -2.24e-15   7.41e-26  -3.02e+10      0.000   -2.24e-15   -2.24e-15
x2          8.461e-16    6.6e-26   1.28e+10      0.000    8.46e-16    8.46e-16
x3          4.901e-16   6.89e-26   7.11e+09      0.000     4.9e-16     4.9e-16
x4             1.0000   6.96e-26   1.44e+25      0.000       1.000       1.000
x5          5.931e-15   6.61e-26   8.97e+10      0.000    5.93e-15    5.93e-15
x6          -1.05e-15    1.5e-25     -7e+09      0.000   -1.05e-15   -1.05e-15
x7          1.439e-15   6.87e-26    2.1e+10      0.000    1.44e-15    1.44e-15
x8          -1.25e-15    6.8e-26  -1.84e+10      0.000   -1.25e-15   -1.25e-15
x9         -9.356e-17   8.97e-27  -1.04e+10      0.000   -9.36e-17   -9.36e-17
x10        -1.145e-16   2.88e-26  -3.98e+09      0.000   -1.15e-16   -1.15e-16
x11        -2.036e-16    6.8e-26     -3e+09      0.000   -2.04e-16   -2.04e-16
x12         5.951e-16   6.76e-26   8.81e+09      0.000    5.95e-16    5.95e-16
x13        -6.117e-17   6.94e-26  -8.81e+08      0.000   -6.12e-17   -6.12e-17
x14         1.167e-15   1.99e-25   5.85e+09      0.000    1.17e-15    1.17e-15
x15        -4.274e-14   6.99e-26  -6.11e+11      0.000   -4.27e-14   -4.27e-14
x16         2.262e-14   8.56e-26   2.64e+11      0.000    2.26e-14    2.26e-14
x17         3.384e-14   6.46e-26   5.24e+11      0.000    3.38e-14    3.38e-14
x18         9.894e-16    5.8e-26   1.71e+10      0.000    9.89e-16    9.89e-16
x19         4.115e-14   7.75e-26   5.31e+11      0.000    4.12e-14    4.12e-14
x20        -2.176e-15   9.49e-26  -2.29e+10      0.000   -2.18e-15   -2.18e-15
x21        -7.755e-17   4.63e-26  -1.67e+09      0.000   -7.75e-17   -7.75e-17
ar.L1         -0.9988   9.76e-22  -1.02e+21      0.000      -0.999      -0.999
ar.L2         -0.4972   4.07e-23  -1.22e+22      0.000      -0.497      -0.497
sigma2          1e-10   6.99e-11      1.432      0.152   -3.69e-11    2.37e-10
===================================================================================
Ljung-Box (L1) (Q):                  31.54   Jarque-Bera (JB):           2432532.03
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                            -0.15
Prob(H) (two-sided):                  0.00   Kurtosis:                       272.30
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 7.19e+48. Standard errors may be unstable.
ARIMA order: (2, 3, 0) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.04342, saving model to LSTM8.h5
45/45 - 4s - loss: 1.3084 - val_loss: 0.0434 - lr: 0.0010 - 4s/epoch - 78ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.04342
45/45 - 0s - loss: 1.0770 - val_loss: 0.0462 - lr: 0.0010 - 212ms/epoch - 5ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.04342
45/45 - 0s - loss: 0.9119 - val_loss: 0.0491 - lr: 0.0010 - 204ms/epoch - 5ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.04342
45/45 - 0s - loss: 0.8332 - val_loss: 0.0518 - lr: 0.0010 - 227ms/epoch - 5ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.04342
45/45 - 0s - loss: 0.7877 - val_loss: 0.0544 - lr: 0.0010 - 218ms/epoch - 5ms/step
Epoch 6/500

Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00006: val_loss did not improve from 0.04342
45/45 - 0s - loss: 0.7566 - val_loss: 0.0570 - lr: 0.0010 - 207ms/epoch - 5ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.04342
45/45 - 0s - loss: 0.7413 - val_loss: 0.0573 - lr: 1.0000e-04 - 211ms/epoch - 5ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.04342
45/45 - 0s - loss: 0.7391 - val_loss: 0.0575 - lr: 1.0000e-04 - 211ms/epoch - 5ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.04342
45/45 - 0s - loss: 0.7368 - val_loss: 0.0578 - lr: 1.0000e-04 - 215ms/epoch - 5ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.04342
45/45 - 0s - loss: 0.7346 - val_loss: 0.0581 - lr: 1.0000e-04 - 216ms/epoch - 5ms/step
Epoch 11/500

Epoch 00011: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00011: val_loss did not improve from 0.04342
45/45 - 0s - loss: 0.7323 - val_loss: 0.0584 - lr: 1.0000e-04 - 219ms/epoch - 5ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.04342
45/45 - 0s - loss: 0.7309 - val_loss: 0.0584 - lr: 1.0000e-05 - 210ms/epoch - 5ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.04342
45/45 - 0s - loss: 0.7307 - val_loss: 0.0585 - lr: 1.0000e-05 - 214ms/epoch - 5ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.04342
45/45 - 0s - loss: 0.7304 - val_loss: 0.0585 - lr: 1.0000e-05 - 225ms/epoch - 5ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.04342
45/45 - 0s - loss: 0.7302 - val_loss: 0.0586 - lr: 1.0000e-05 - 223ms/epoch - 5ms/step
Epoch 16/500

Epoch 00016: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00016: val_loss did not improve from 0.04342
45/45 - 0s - loss: 0.7299 - val_loss: 0.0586 - lr: 1.0000e-05 - 219ms/epoch - 5ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.04342
45/45 - 0s - loss: 0.7297 - val_loss: 0.0586 - lr: 1.0000e-05 - 215ms/epoch - 5ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.04342
45/45 - 0s - loss: 0.7294 - val_loss: 0.0587 - lr: 1.0000e-05 - 226ms/epoch - 5ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.04342
45/45 - 0s - loss: 0.7292 - val_loss: 0.0587 - lr: 1.0000e-05 - 217ms/epoch - 5ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.04342
45/45 - 0s - loss: 0.7289 - val_loss: 0.0587 - lr: 1.0000e-05 - 211ms/epoch - 5ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.04342
45/45 - 0s - loss: 0.7287 - val_loss: 0.0588 - lr: 1.0000e-05 - 210ms/epoch - 5ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.04342
45/45 - 0s - loss: 0.7284 - val_loss: 0.0588 - lr: 1.0000e-05 - 218ms/epoch - 5ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.04342
45/45 - 0s - loss: 0.7281 - val_loss: 0.0589 - lr: 1.0000e-05 - 232ms/epoch - 5ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.04342
45/45 - 0s - loss: 0.7279 - val_loss: 0.0589 - lr: 1.0000e-05 - 219ms/epoch - 5ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.04342
45/45 - 0s - loss: 0.7276 - val_loss: 0.0590 - lr: 1.0000e-05 - 208ms/epoch - 5ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.04342
45/45 - 0s - loss: 0.7273 - val_loss: 0.0590 - lr: 1.0000e-05 - 201ms/epoch - 4ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.04342
45/45 - 0s - loss: 0.7271 - val_loss: 0.0590 - lr: 1.0000e-05 - 231ms/epoch - 5ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.04342
45/45 - 0s - loss: 0.7268 - val_loss: 0.0591 - lr: 1.0000e-05 - 219ms/epoch - 5ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.04342
45/45 - 0s - loss: 0.7265 - val_loss: 0.0591 - lr: 1.0000e-05 - 210ms/epoch - 5ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.04342
45/45 - 0s - loss: 0.7262 - val_loss: 0.0592 - lr: 1.0000e-05 - 206ms/epoch - 5ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.04342
45/45 - 0s - loss: 0.7260 - val_loss: 0.0592 - lr: 1.0000e-05 - 208ms/epoch - 5ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.04342
45/45 - 0s - loss: 0.7257 - val_loss: 0.0593 - lr: 1.0000e-05 - 214ms/epoch - 5ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.04342
45/45 - 0s - loss: 0.7254 - val_loss: 0.0593 - lr: 1.0000e-05 - 224ms/epoch - 5ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.04342
45/45 - 0s - loss: 0.7251 - val_loss: 0.0594 - lr: 1.0000e-05 - 215ms/epoch - 5ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.04342
45/45 - 0s - loss: 0.7248 - val_loss: 0.0594 - lr: 1.0000e-05 - 215ms/epoch - 5ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.04342
45/45 - 0s - loss: 0.7246 - val_loss: 0.0595 - lr: 1.0000e-05 - 216ms/epoch - 5ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.04342
45/45 - 0s - loss: 0.7243 - val_loss: 0.0595 - lr: 1.0000e-05 - 217ms/epoch - 5ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.04342
45/45 - 0s - loss: 0.7240 - val_loss: 0.0596 - lr: 1.0000e-05 - 212ms/epoch - 5ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.04342
45/45 - 0s - loss: 0.7237 - val_loss: 0.0596 - lr: 1.0000e-05 - 205ms/epoch - 5ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.04342
45/45 - 0s - loss: 0.7234 - val_loss: 0.0597 - lr: 1.0000e-05 - 224ms/epoch - 5ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.04342
45/45 - 0s - loss: 0.7231 - val_loss: 0.0598 - lr: 1.0000e-05 - 223ms/epoch - 5ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.04342
45/45 - 0s - loss: 0.7228 - val_loss: 0.0598 - lr: 1.0000e-05 - 215ms/epoch - 5ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.04342
45/45 - 0s - loss: 0.7226 - val_loss: 0.0599 - lr: 1.0000e-05 - 217ms/epoch - 5ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.04342
45/45 - 0s - loss: 0.7223 - val_loss: 0.0599 - lr: 1.0000e-05 - 215ms/epoch - 5ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.04342
45/45 - 0s - loss: 0.7220 - val_loss: 0.0600 - lr: 1.0000e-05 - 220ms/epoch - 5ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.04342
45/45 - 0s - loss: 0.7217 - val_loss: 0.0600 - lr: 1.0000e-05 - 227ms/epoch - 5ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.04342
45/45 - 0s - loss: 0.7214 - val_loss: 0.0601 - lr: 1.0000e-05 - 224ms/epoch - 5ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.04342
45/45 - 0s - loss: 0.7211 - val_loss: 0.0602 - lr: 1.0000e-05 - 207ms/epoch - 5ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.04342
45/45 - 0s - loss: 0.7208 - val_loss: 0.0602 - lr: 1.0000e-05 - 207ms/epoch - 5ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.04342
45/45 - 0s - loss: 0.7205 - val_loss: 0.0603 - lr: 1.0000e-05 - 237ms/epoch - 5ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.04342
45/45 - 0s - loss: 0.7203 - val_loss: 0.0603 - lr: 1.0000e-05 - 227ms/epoch - 5ms/step
Epoch 00051: early stopping
SMA
Prediction vs Close:		54.85% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 27.593274833467863 
RMSE:	 5.252930118844897 
MAPE:	 4.117405060238624

EMA
Prediction vs Close:		54.85% Accuracy
Prediction vs Prediction:	49.63% Accuracy
MSE:	 36.00024912834101 
RMSE:	 6.000020760659167 
MAPE:	 4.70426166066095

WMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 41.43257052147296 
RMSE:	 6.4368136932393005 
MAPE:	 5.073793611210466

DEMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	45.15% Accuracy
MSE:	 181.24518187853474 
RMSE:	 13.462733076108087 
MAPE:	 12.35792879577932

KAMA
Prediction vs Close:		56.72% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 27.433326731723586 
RMSE:	 5.237683336335214 
MAPE:	 4.154161853007022
MIDPOINT
MIDPOINT([input_arrays], [timeperiod=14])

MidPoint over period (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 14
Outputs:
    real
14
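MIDPOINT is the simplest indicator here: the average of the highest and lowest price in each rolling window. A pure-Python sketch (the notebook uses TA-Lib's `MIDPOINT`; this helper is illustrative):

```python
def midpoint(prices, timeperiod=14):
    # MIDPOINT: (highest + lowest) / 2 over each rolling window of
    # length `timeperiod`; output starts at the first full window
    out = []
    for t in range(timeperiod - 1, len(prices)):
        window = prices[t - timeperiod + 1 : t + 1]
        out.append((max(window) + min(window)) / 2)
    return out

print(midpoint(list(range(1, 15))))  # single full window of 1..14 -> [7.5]
```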

Working on MIDPOINT predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-17005.753, Time=2.34 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14574.592, Time=3.97 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16288.639, Time=11.00 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-14572.592, Time=5.50 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-15275.254, Time=7.01 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-15486.751, Time=12.68 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=48.000, Time=0.54 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-17008.491, Time=2.31 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=-17004.554, Time=3.03 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=-17087.445, Time=6.14 sec
 ARIMA(3,3,2)(0,0,0)[0]             : AIC=-15686.421, Time=10.07 sec
 ARIMA(2,3,2)(0,0,0)[0]             : AIC=-17030.168, Time=13.88 sec
 ARIMA(3,3,1)(0,0,0)[0] intercept   : AIC=-15138.715, Time=14.55 sec

Best model:  ARIMA(3,3,1)(0,0,0)[0]          
Total fit time: 93.028 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 1)   Log Likelihood                8569.722
Date:                Sun, 12 Dec 2021   AIC                         -17087.445
Time:                        16:16:53   BIC                         -16965.483
Sample:                             0   HQIC                        -17040.607
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1          -2.14e-10   1.09e-20  -1.96e+10      0.000   -2.14e-10   -2.14e-10
x2         -2.126e-10   1.13e-20  -1.88e+10      0.000   -2.13e-10   -2.13e-10
x3         -2.175e-10   1.06e-20  -2.04e+10      0.000   -2.17e-10   -2.17e-10
x4             1.0000    1.1e-20   9.11e+19      0.000       1.000       1.000
x5         -1.941e-10   1.05e-20  -1.86e+10      0.000   -1.94e-10   -1.94e-10
x6         -4.131e-09   7.64e-20   -5.4e+10      0.000   -4.13e-09   -4.13e-09
x7         -1.965e-10   1.05e-20  -1.86e+10      0.000   -1.96e-10   -1.96e-10
x8         -1.961e-10   1.07e-20  -1.84e+10      0.000   -1.96e-10   -1.96e-10
x9         -1.005e-10   9.12e-22   -1.1e+11      0.000      -1e-10      -1e-10
x10        -1.739e-10   3.37e-21  -5.16e+10      0.000   -1.74e-10   -1.74e-10
x11        -1.941e-10   1.07e-20  -1.82e+10      0.000   -1.94e-10   -1.94e-10
x12        -2.005e-10   1.06e-20  -1.89e+10      0.000      -2e-10      -2e-10
x13        -2.056e-10   1.07e-20  -1.91e+10      0.000   -2.06e-10   -2.06e-10
x14        -1.687e-09   3.15e-20  -5.36e+10      0.000   -1.69e-09   -1.69e-09
x15        -2.365e-10   1.17e-20  -2.01e+10      0.000   -2.36e-10   -2.36e-10
x16        -1.523e-10   9.42e-21  -1.62e+10      0.000   -1.52e-10   -1.52e-10
x17        -1.491e-10   9.33e-21   -1.6e+10      0.000   -1.49e-10   -1.49e-10
x18        -6.404e-10   1.93e-20  -3.32e+10      0.000    -6.4e-10    -6.4e-10
x19        -2.596e-10   1.23e-20  -2.11e+10      0.000    -2.6e-10    -2.6e-10
x20        -6.246e-10   1.91e-20  -3.28e+10      0.000   -6.25e-10   -6.25e-10
x21        -1.953e-09   2.16e-20  -9.04e+10      0.000   -1.95e-09   -1.95e-09
ar.L1         -0.4914   1.46e-22  -3.38e+21      0.000      -0.491      -0.491
ar.L2         -0.1934   8.48e-23  -2.28e+21      0.000      -0.193      -0.193
ar.L3         -0.0491    4.2e-23  -1.17e+21      0.000      -0.049      -0.049
ma.L1         -0.7092   3.33e-22  -2.13e+21      0.000      -0.709      -0.709
sigma2       8.99e-11   6.95e-11      1.293      0.196   -4.64e-11    2.26e-10
===================================================================================
Ljung-Box (L1) (Q):                  32.51   Jarque-Bera (JB):             49038.38
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.35   Skew:                             1.06
Prob(H) (two-sided):                  0.00   Kurtosis:                        41.18
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.71e+40. Standard errors may be unstable.
ARIMA order: (3, 3, 1) 
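The learning-rate trace in the training log below (1.0e-3 through epoch 6, 1.0e-4 through epoch 11, then a 1e-5 floor) is consistent with Keras's `ReduceLROnPlateau` monitoring `val_loss` with `factor=0.1`, `patience=5`, `min_lr=1e-5`, plus `EarlyStopping` with a patience of 50. Those parameter values are inferred from the log, not shown in this chunk; a minimal pure-Python re-implementation of the plateau rule reproduces the trace:

```python
def plateau_schedule(val_losses, lr=1e-3, factor=0.1, patience=5, min_lr=1e-5):
    """Return the learning rate in effect at each epoch under the
    ReduceLROnPlateau rule (values inferred from the log below)."""
    best, wait, lrs = float("inf"), 0, []
    for loss in val_losses:
        lrs.append(lr)                      # lr used for this epoch
        if loss < best:                     # improvement: reset the counter
            best, wait = loss, 0
        else:                               # plateau: count strikes
            wait += 1
            if wait >= patience:            # patience exhausted: cut the lr
                lr = max(lr * factor, min_lr)
                wait = 0
    return lrs

# val_loss values from epochs 1-12 of the log below
trace = [0.0434, 0.0475, 0.0530, 0.0622, 0.0747, 0.0883,
         0.0896, 0.0909, 0.0923, 0.0938, 0.0952, 0.0954]
lrs = plateau_schedule(trace)
```

As in the log, the reduction is announced at epochs 6 and 11 but the new rate only applies from the following epoch.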

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.04338, saving model to LSTM8.h5
58/58 - 4s - loss: 1.2850 - val_loss: 0.0434 - lr: 0.0010 - 4s/epoch - 67ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.04338
58/58 - 0s - loss: 1.1296 - val_loss: 0.0475 - lr: 0.0010 - 279ms/epoch - 5ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.04338
58/58 - 0s - loss: 0.9824 - val_loss: 0.0530 - lr: 0.0010 - 297ms/epoch - 5ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.04338
58/58 - 0s - loss: 0.8742 - val_loss: 0.0622 - lr: 0.0010 - 297ms/epoch - 5ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.04338
58/58 - 0s - loss: 0.8035 - val_loss: 0.0747 - lr: 0.0010 - 278ms/epoch - 5ms/step
Epoch 6/500

Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00006: val_loss did not improve from 0.04338
58/58 - 0s - loss: 0.7564 - val_loss: 0.0883 - lr: 0.0010 - 265ms/epoch - 5ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.04338
58/58 - 0s - loss: 0.7343 - val_loss: 0.0896 - lr: 1.0000e-04 - 295ms/epoch - 5ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.04338
58/58 - 0s - loss: 0.7312 - val_loss: 0.0909 - lr: 1.0000e-04 - 278ms/epoch - 5ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.04338
58/58 - 0s - loss: 0.7280 - val_loss: 0.0923 - lr: 1.0000e-04 - 274ms/epoch - 5ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.04338
58/58 - 0s - loss: 0.7249 - val_loss: 0.0938 - lr: 1.0000e-04 - 292ms/epoch - 5ms/step
Epoch 11/500

Epoch 00011: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00011: val_loss did not improve from 0.04338
58/58 - 0s - loss: 0.7218 - val_loss: 0.0952 - lr: 1.0000e-04 - 275ms/epoch - 5ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.04338
58/58 - 0s - loss: 0.7199 - val_loss: 0.0954 - lr: 1.0000e-05 - 276ms/epoch - 5ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.04338
58/58 - 0s - loss: 0.7196 - val_loss: 0.0956 - lr: 1.0000e-05 - 263ms/epoch - 5ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.04338
58/58 - 0s - loss: 0.7193 - val_loss: 0.0957 - lr: 1.0000e-05 - 283ms/epoch - 5ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.04338
58/58 - 0s - loss: 0.7190 - val_loss: 0.0959 - lr: 1.0000e-05 - 268ms/epoch - 5ms/step
Epoch 16/500

Epoch 00016: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00016: val_loss did not improve from 0.04338
58/58 - 0s - loss: 0.7186 - val_loss: 0.0960 - lr: 1.0000e-05 - 282ms/epoch - 5ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.04338
58/58 - 0s - loss: 0.7183 - val_loss: 0.0962 - lr: 1.0000e-05 - 280ms/epoch - 5ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.04338
58/58 - 0s - loss: 0.7180 - val_loss: 0.0964 - lr: 1.0000e-05 - 285ms/epoch - 5ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.04338
58/58 - 0s - loss: 0.7176 - val_loss: 0.0966 - lr: 1.0000e-05 - 263ms/epoch - 5ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.04338
58/58 - 0s - loss: 0.7173 - val_loss: 0.0967 - lr: 1.0000e-05 - 267ms/epoch - 5ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.04338
58/58 - 0s - loss: 0.7169 - val_loss: 0.0969 - lr: 1.0000e-05 - 280ms/epoch - 5ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.04338
58/58 - 0s - loss: 0.7166 - val_loss: 0.0971 - lr: 1.0000e-05 - 269ms/epoch - 5ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.04338
58/58 - 0s - loss: 0.7162 - val_loss: 0.0973 - lr: 1.0000e-05 - 268ms/epoch - 5ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.04338
58/58 - 0s - loss: 0.7159 - val_loss: 0.0975 - lr: 1.0000e-05 - 274ms/epoch - 5ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.04338
58/58 - 0s - loss: 0.7155 - val_loss: 0.0977 - lr: 1.0000e-05 - 277ms/epoch - 5ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.04338
58/58 - 0s - loss: 0.7152 - val_loss: 0.0979 - lr: 1.0000e-05 - 272ms/epoch - 5ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.04338
58/58 - 0s - loss: 0.7148 - val_loss: 0.0981 - lr: 1.0000e-05 - 279ms/epoch - 5ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.04338
58/58 - 0s - loss: 0.7144 - val_loss: 0.0983 - lr: 1.0000e-05 - 280ms/epoch - 5ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.04338
58/58 - 0s - loss: 0.7141 - val_loss: 0.0985 - lr: 1.0000e-05 - 270ms/epoch - 5ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.04338
58/58 - 0s - loss: 0.7137 - val_loss: 0.0987 - lr: 1.0000e-05 - 278ms/epoch - 5ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.04338
58/58 - 0s - loss: 0.7133 - val_loss: 0.0989 - lr: 1.0000e-05 - 276ms/epoch - 5ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.04338
58/58 - 0s - loss: 0.7130 - val_loss: 0.0991 - lr: 1.0000e-05 - 280ms/epoch - 5ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.04338
58/58 - 0s - loss: 0.7126 - val_loss: 0.0993 - lr: 1.0000e-05 - 271ms/epoch - 5ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.04338
58/58 - 0s - loss: 0.7122 - val_loss: 0.0996 - lr: 1.0000e-05 - 263ms/epoch - 5ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.04338
58/58 - 0s - loss: 0.7118 - val_loss: 0.0998 - lr: 1.0000e-05 - 273ms/epoch - 5ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.04338
58/58 - 0s - loss: 0.7115 - val_loss: 0.1000 - lr: 1.0000e-05 - 278ms/epoch - 5ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.04338
58/58 - 0s - loss: 0.7111 - val_loss: 0.1002 - lr: 1.0000e-05 - 290ms/epoch - 5ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.04338
58/58 - 0s - loss: 0.7107 - val_loss: 0.1004 - lr: 1.0000e-05 - 268ms/epoch - 5ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.04338
58/58 - 0s - loss: 0.7103 - val_loss: 0.1006 - lr: 1.0000e-05 - 295ms/epoch - 5ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.04338
58/58 - 0s - loss: 0.7100 - val_loss: 0.1009 - lr: 1.0000e-05 - 280ms/epoch - 5ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.04338
58/58 - 0s - loss: 0.7096 - val_loss: 0.1011 - lr: 1.0000e-05 - 272ms/epoch - 5ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.04338
58/58 - 0s - loss: 0.7092 - val_loss: 0.1013 - lr: 1.0000e-05 - 269ms/epoch - 5ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.04338
58/58 - 0s - loss: 0.7088 - val_loss: 0.1015 - lr: 1.0000e-05 - 286ms/epoch - 5ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.04338
58/58 - 0s - loss: 0.7085 - val_loss: 0.1018 - lr: 1.0000e-05 - 256ms/epoch - 4ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.04338
58/58 - 0s - loss: 0.7081 - val_loss: 0.1020 - lr: 1.0000e-05 - 270ms/epoch - 5ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.04338
58/58 - 0s - loss: 0.7077 - val_loss: 0.1022 - lr: 1.0000e-05 - 267ms/epoch - 5ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.04338
58/58 - 0s - loss: 0.7073 - val_loss: 0.1024 - lr: 1.0000e-05 - 273ms/epoch - 5ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.04338
58/58 - 0s - loss: 0.7070 - val_loss: 0.1027 - lr: 1.0000e-05 - 256ms/epoch - 4ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.04338
58/58 - 0s - loss: 0.7066 - val_loss: 0.1029 - lr: 1.0000e-05 - 258ms/epoch - 4ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.04338
58/58 - 0s - loss: 0.7062 - val_loss: 0.1031 - lr: 1.0000e-05 - 285ms/epoch - 5ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.04338
58/58 - 0s - loss: 0.7058 - val_loss: 0.1034 - lr: 1.0000e-05 - 270ms/epoch - 5ms/step
Epoch 00051: early stopping
SMA
Prediction vs Close:		54.85% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 27.593274833467863 
RMSE:	 5.252930118844897 
MAPE:	 4.117405060238624

EMA
Prediction vs Close:		54.85% Accuracy
Prediction vs Prediction:	49.63% Accuracy
MSE:	 36.00024912834101 
RMSE:	 6.000020760659167 
MAPE:	 4.70426166066095

WMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 41.43257052147296 
RMSE:	 6.4368136932393005 
MAPE:	 5.073793611210466

DEMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	45.15% Accuracy
MSE:	 181.24518187853474 
RMSE:	 13.462733076108087 
MAPE:	 12.35792879577932

KAMA
Prediction vs Close:		56.72% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 27.433326731723586 
RMSE:	 5.237683336335214 
MAPE:	 4.154161853007022

MIDPOINT
Prediction vs Close:		51.87% Accuracy
Prediction vs Prediction:	44.78% Accuracy
MSE:	 17.391761137690516 
RMSE:	 4.170343047962663 
MAPE:	 3.393694867236689
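The MSE, RMSE, and MAPE figures above measure prediction error against the close price. A minimal sketch of those three metrics in plain Python (the notebook itself may compute them via numpy or sklearn; the sample series here is made up):

```python
import math

def mse(y_true, y_pred):
    """Mean squared error."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """Root mean squared error: same units as the price series."""
    return math.sqrt(mse(y_true, y_pred))

def mape(y_true, y_pred):
    """Mean absolute percentage error (true values must be non-zero)."""
    return 100 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)

close = [100.0, 102.0, 101.0, 105.0]
pred  = [101.0, 101.0, 103.0, 104.0]
```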
T3
T3([input_arrays], [timeperiod=5], [vfactor=0.7])

Triple Exponential Moving Average (T3) (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 5
    vfactor: 0.7
Outputs:
    real
19
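The T3 described in the help text above is Tillson's triple-smoothed average: a "generalized DEMA" GD(x) = EMA(x)·(1+v) − EMA(EMA(x))·v applied three times with volume factor v. A rough pure-Python sketch of that definition (it seeds each EMA with the first observation, whereas TA-Lib uses an SMA warm-up, so early values will differ slightly from `talib.T3`):

```python
def ema(xs, n):
    """Exponential moving average, seeded with the first observation."""
    alpha, out = 2.0 / (n + 1), [xs[0]]
    for x in xs[1:]:
        out.append(alpha * x + (1 - alpha) * out[-1])
    return out

def gd(xs, n, v):
    """Tillson's 'generalized DEMA': EMA*(1+v) - EMA(EMA)*v."""
    e1 = ema(xs, n)
    e2 = ema(e1, n)
    return [(1 + v) * a - v * b for a, b in zip(e1, e2)]

def t3(xs, timeperiod=5, vfactor=0.7):
    """T3 = GD(GD(GD(x))); vfactor=0 reduces to a plain triple EMA chain."""
    out = xs
    for _ in range(3):
        out = gd(out, timeperiod, vfactor)
    return out
```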

Working on T3 predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-16569.270, Time=2.34 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-14511.291, Time=2.50 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-15408.738, Time=7.88 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-15165.005, Time=8.17 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-15595.465, Time=7.31 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-15837.470, Time=9.86 sec
 ARIMA(0,3,2)(0,0,0)[0]             : AIC=-15491.538, Time=9.06 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-16378.438, Time=2.58 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal
  return np.roots(self.polynomial_reduced_ma)**-1
 ARIMA(2,3,2)(0,0,0)[0]             : AIC=-16318.604, Time=3.56 sec
 ARIMA(1,3,1)(0,0,0)[0] intercept   : AIC=-16567.270, Time=2.35 sec

Best model:  ARIMA(1,3,1)(0,0,0)[0]          
Total fit time: 55.608 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(1, 3, 1)   Log Likelihood                8308.635
Date:                Sun, 12 Dec 2021   AIC                         -16569.270
Time:                        16:25:22   BIC                         -16456.690
Sample:                             0   HQIC                        -16526.035
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1          1.355e-13   3.43e-05   3.95e-09      1.000   -6.72e-05    6.72e-05
x2          5.009e-14   2.67e-05   1.88e-09      1.000   -5.23e-05    5.23e-05
x3         -9.101e-15   2.19e-05  -4.16e-10      1.000   -4.29e-05    4.29e-05
x4             1.0000   2.97e-05   3.37e+04      0.000       1.000       1.000
x5          3.626e-12    3.2e-05   1.13e-07      1.000   -6.28e-05    6.28e-05
x6          6.879e-17      0.000   5.13e-13      1.000      -0.000       0.000
x7          1.588e-13   4.04e-05   3.93e-09      1.000   -7.92e-05    7.92e-05
x8            -0.0002   9.77e-06    -20.395      0.000      -0.000      -0.000
x9          3.877e-14      0.001   6.24e-11      1.000      -0.001       0.001
x10         -7.41e-05      0.001     -0.129      0.897      -0.001       0.001
x11            0.0003   4.91e-05      6.926      0.000       0.000       0.000
x12           -0.0004   7.27e-05     -5.556      0.000      -0.001      -0.000
x13        -2.679e-14   3.39e-05   -7.9e-10      1.000   -6.65e-05    6.65e-05
x14          2.97e-13      0.000   2.31e-09      1.000      -0.000       0.000
x15         1.602e-12   7.47e-05   2.14e-08      1.000      -0.000       0.000
x16        -8.756e-13   4.29e-05  -2.04e-08      1.000   -8.41e-05    8.41e-05
x17         1.793e-12   6.56e-05   2.74e-08      1.000      -0.000       0.000
x18        -1.019e-13      0.000  -5.54e-10      1.000      -0.000       0.000
x19        -1.077e-12   8.29e-05   -1.3e-08      1.000      -0.000       0.000
x20         1.771e-13   8.45e-05    2.1e-09      1.000      -0.000       0.000
x21         9.233e-16      0.000   1.94e-12      1.000      -0.001       0.001
ar.L1         -0.2857      0.000  -2747.572      0.000      -0.286      -0.285
ma.L1         -0.9142   7.12e-06  -1.28e+05      0.000      -0.914      -0.914
sigma2          1e-10      7e-11      1.429      0.153   -3.71e-11    2.37e-10
===================================================================================
Ljung-Box (L1) (Q):                  84.32   Jarque-Bera (JB):           4804295.53
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             5.87
Prob(H) (two-sided):                  0.00   Kurtosis:                       381.28
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 2.06e+20. Standard errors may be unstable.
ARIMA order: (1, 3, 1) 
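The information criteria in the summary above follow directly from the reported log likelihood. With k = 24 estimated parameters (21 exogenous coefficients plus ar.L1, ma.L1, and sigma2), ln L = 8308.635, and — for BIC and HQIC — n = 805 effective observations (808 minus the d = 3 differences), the reported values check out:

```python
import math

logL, k = 8308.635, 24   # 21 exogenous coefs + ar.L1 + ma.L1 + sigma2
n_eff = 808 - 3          # sample size after third-order differencing

aic  = 2 * k - 2 * logL                              # ≈ -16569.27
bic  = k * math.log(n_eff) - 2 * logL                # ≈ -16456.69
hqic = 2 * k * math.log(math.log(n_eff)) - 2 * logL  # ≈ -16526.04
```

All three agree with the summary table to the printed precision.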

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.05022, saving model to LSTM8.h5
43/43 - 4s - loss: 1.4172 - val_loss: 0.0502 - lr: 0.0010 - 4s/epoch - 87ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.05022
43/43 - 0s - loss: 1.3610 - val_loss: 0.0527 - lr: 0.0010 - 220ms/epoch - 5ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.05022
43/43 - 0s - loss: 1.2831 - val_loss: 0.0558 - lr: 0.0010 - 208ms/epoch - 5ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.05022
43/43 - 0s - loss: 1.2176 - val_loss: 0.0592 - lr: 0.0010 - 228ms/epoch - 5ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.05022
43/43 - 0s - loss: 1.1635 - val_loss: 0.0629 - lr: 0.0010 - 205ms/epoch - 5ms/step
Epoch 6/500

Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00006: val_loss did not improve from 0.05022
43/43 - 0s - loss: 1.1164 - val_loss: 0.0668 - lr: 0.0010 - 211ms/epoch - 5ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.05022
43/43 - 0s - loss: 1.0893 - val_loss: 0.0672 - lr: 1.0000e-04 - 198ms/epoch - 5ms/step
Epoch 8/500

Epoch 00008: val_loss did not improve from 0.05022
43/43 - 0s - loss: 1.0853 - val_loss: 0.0676 - lr: 1.0000e-04 - 212ms/epoch - 5ms/step
Epoch 9/500

Epoch 00009: val_loss did not improve from 0.05022
43/43 - 0s - loss: 1.0813 - val_loss: 0.0680 - lr: 1.0000e-04 - 211ms/epoch - 5ms/step
Epoch 10/500

Epoch 00010: val_loss did not improve from 0.05022
43/43 - 0s - loss: 1.0773 - val_loss: 0.0685 - lr: 1.0000e-04 - 197ms/epoch - 5ms/step
Epoch 11/500

Epoch 00011: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.

Epoch 00011: val_loss did not improve from 0.05022
43/43 - 0s - loss: 1.0734 - val_loss: 0.0689 - lr: 1.0000e-04 - 206ms/epoch - 5ms/step
Epoch 12/500

Epoch 00012: val_loss did not improve from 0.05022
43/43 - 0s - loss: 1.0709 - val_loss: 0.0690 - lr: 1.0000e-05 - 195ms/epoch - 5ms/step
Epoch 13/500

Epoch 00013: val_loss did not improve from 0.05022
43/43 - 0s - loss: 1.0705 - val_loss: 0.0690 - lr: 1.0000e-05 - 209ms/epoch - 5ms/step
Epoch 14/500

Epoch 00014: val_loss did not improve from 0.05022
43/43 - 0s - loss: 1.0702 - val_loss: 0.0690 - lr: 1.0000e-05 - 212ms/epoch - 5ms/step
Epoch 15/500

Epoch 00015: val_loss did not improve from 0.05022
43/43 - 0s - loss: 1.0698 - val_loss: 0.0691 - lr: 1.0000e-05 - 206ms/epoch - 5ms/step
Epoch 16/500

Epoch 00016: ReduceLROnPlateau reducing learning rate to 1e-05.

Epoch 00016: val_loss did not improve from 0.05022
43/43 - 0s - loss: 1.0694 - val_loss: 0.0691 - lr: 1.0000e-05 - 198ms/epoch - 5ms/step
Epoch 17/500

Epoch 00017: val_loss did not improve from 0.05022
43/43 - 0s - loss: 1.0690 - val_loss: 0.0692 - lr: 1.0000e-05 - 205ms/epoch - 5ms/step
Epoch 18/500

Epoch 00018: val_loss did not improve from 0.05022
43/43 - 0s - loss: 1.0686 - val_loss: 0.0692 - lr: 1.0000e-05 - 207ms/epoch - 5ms/step
Epoch 19/500

Epoch 00019: val_loss did not improve from 0.05022
43/43 - 0s - loss: 1.0682 - val_loss: 0.0693 - lr: 1.0000e-05 - 213ms/epoch - 5ms/step
Epoch 20/500

Epoch 00020: val_loss did not improve from 0.05022
43/43 - 0s - loss: 1.0678 - val_loss: 0.0693 - lr: 1.0000e-05 - 207ms/epoch - 5ms/step
Epoch 21/500

Epoch 00021: val_loss did not improve from 0.05022
43/43 - 0s - loss: 1.0674 - val_loss: 0.0694 - lr: 1.0000e-05 - 208ms/epoch - 5ms/step
Epoch 22/500

Epoch 00022: val_loss did not improve from 0.05022
43/43 - 0s - loss: 1.0670 - val_loss: 0.0694 - lr: 1.0000e-05 - 218ms/epoch - 5ms/step
Epoch 23/500

Epoch 00023: val_loss did not improve from 0.05022
43/43 - 0s - loss: 1.0666 - val_loss: 0.0695 - lr: 1.0000e-05 - 213ms/epoch - 5ms/step
Epoch 24/500

Epoch 00024: val_loss did not improve from 0.05022
43/43 - 0s - loss: 1.0662 - val_loss: 0.0695 - lr: 1.0000e-05 - 207ms/epoch - 5ms/step
Epoch 25/500

Epoch 00025: val_loss did not improve from 0.05022
43/43 - 0s - loss: 1.0658 - val_loss: 0.0696 - lr: 1.0000e-05 - 214ms/epoch - 5ms/step
Epoch 26/500

Epoch 00026: val_loss did not improve from 0.05022
43/43 - 0s - loss: 1.0654 - val_loss: 0.0696 - lr: 1.0000e-05 - 213ms/epoch - 5ms/step
Epoch 27/500

Epoch 00027: val_loss did not improve from 0.05022
43/43 - 0s - loss: 1.0650 - val_loss: 0.0697 - lr: 1.0000e-05 - 215ms/epoch - 5ms/step
Epoch 28/500

Epoch 00028: val_loss did not improve from 0.05022
43/43 - 0s - loss: 1.0647 - val_loss: 0.0697 - lr: 1.0000e-05 - 218ms/epoch - 5ms/step
Epoch 29/500

Epoch 00029: val_loss did not improve from 0.05022
43/43 - 0s - loss: 1.0643 - val_loss: 0.0698 - lr: 1.0000e-05 - 205ms/epoch - 5ms/step
Epoch 30/500

Epoch 00030: val_loss did not improve from 0.05022
43/43 - 0s - loss: 1.0639 - val_loss: 0.0698 - lr: 1.0000e-05 - 206ms/epoch - 5ms/step
Epoch 31/500

Epoch 00031: val_loss did not improve from 0.05022
43/43 - 0s - loss: 1.0635 - val_loss: 0.0699 - lr: 1.0000e-05 - 236ms/epoch - 5ms/step
Epoch 32/500

Epoch 00032: val_loss did not improve from 0.05022
43/43 - 0s - loss: 1.0631 - val_loss: 0.0699 - lr: 1.0000e-05 - 215ms/epoch - 5ms/step
Epoch 33/500

Epoch 00033: val_loss did not improve from 0.05022
43/43 - 0s - loss: 1.0627 - val_loss: 0.0700 - lr: 1.0000e-05 - 219ms/epoch - 5ms/step
Epoch 34/500

Epoch 00034: val_loss did not improve from 0.05022
43/43 - 0s - loss: 1.0623 - val_loss: 0.0700 - lr: 1.0000e-05 - 195ms/epoch - 5ms/step
Epoch 35/500

Epoch 00035: val_loss did not improve from 0.05022
43/43 - 0s - loss: 1.0619 - val_loss: 0.0701 - lr: 1.0000e-05 - 200ms/epoch - 5ms/step
Epoch 36/500

Epoch 00036: val_loss did not improve from 0.05022
43/43 - 0s - loss: 1.0615 - val_loss: 0.0701 - lr: 1.0000e-05 - 207ms/epoch - 5ms/step
Epoch 37/500

Epoch 00037: val_loss did not improve from 0.05022
43/43 - 0s - loss: 1.0611 - val_loss: 0.0702 - lr: 1.0000e-05 - 205ms/epoch - 5ms/step
Epoch 38/500

Epoch 00038: val_loss did not improve from 0.05022
43/43 - 0s - loss: 1.0607 - val_loss: 0.0702 - lr: 1.0000e-05 - 213ms/epoch - 5ms/step
Epoch 39/500

Epoch 00039: val_loss did not improve from 0.05022
43/43 - 0s - loss: 1.0603 - val_loss: 0.0703 - lr: 1.0000e-05 - 206ms/epoch - 5ms/step
Epoch 40/500

Epoch 00040: val_loss did not improve from 0.05022
43/43 - 0s - loss: 1.0599 - val_loss: 0.0704 - lr: 1.0000e-05 - 199ms/epoch - 5ms/step
Epoch 41/500

Epoch 00041: val_loss did not improve from 0.05022
43/43 - 0s - loss: 1.0595 - val_loss: 0.0704 - lr: 1.0000e-05 - 209ms/epoch - 5ms/step
Epoch 42/500

Epoch 00042: val_loss did not improve from 0.05022
43/43 - 0s - loss: 1.0591 - val_loss: 0.0705 - lr: 1.0000e-05 - 208ms/epoch - 5ms/step
Epoch 43/500

Epoch 00043: val_loss did not improve from 0.05022
43/43 - 0s - loss: 1.0587 - val_loss: 0.0705 - lr: 1.0000e-05 - 209ms/epoch - 5ms/step
Epoch 44/500

Epoch 00044: val_loss did not improve from 0.05022
43/43 - 0s - loss: 1.0583 - val_loss: 0.0706 - lr: 1.0000e-05 - 206ms/epoch - 5ms/step
Epoch 45/500

Epoch 00045: val_loss did not improve from 0.05022
43/43 - 0s - loss: 1.0579 - val_loss: 0.0706 - lr: 1.0000e-05 - 197ms/epoch - 5ms/step
Epoch 46/500

Epoch 00046: val_loss did not improve from 0.05022
43/43 - 0s - loss: 1.0576 - val_loss: 0.0707 - lr: 1.0000e-05 - 218ms/epoch - 5ms/step
Epoch 47/500

Epoch 00047: val_loss did not improve from 0.05022
43/43 - 0s - loss: 1.0572 - val_loss: 0.0707 - lr: 1.0000e-05 - 209ms/epoch - 5ms/step
Epoch 48/500

Epoch 00048: val_loss did not improve from 0.05022
43/43 - 0s - loss: 1.0568 - val_loss: 0.0708 - lr: 1.0000e-05 - 207ms/epoch - 5ms/step
Epoch 49/500

Epoch 00049: val_loss did not improve from 0.05022
43/43 - 0s - loss: 1.0564 - val_loss: 0.0708 - lr: 1.0000e-05 - 213ms/epoch - 5ms/step
Epoch 50/500

Epoch 00050: val_loss did not improve from 0.05022
43/43 - 0s - loss: 1.0560 - val_loss: 0.0709 - lr: 1.0000e-05 - 197ms/epoch - 5ms/step
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.05022
43/43 - 0s - loss: 1.0556 - val_loss: 0.0710 - lr: 1.0000e-05 - 211ms/epoch - 5ms/step
Epoch 00051: early stopping

T3
Prediction vs Close:		53.73% Accuracy
Prediction vs Prediction:	47.76% Accuracy
MSE:	 65.97403618729489 
RMSE:	 8.122440285240321 
MAPE:	 6.528054740759965
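The "Prediction vs Close" accuracies above score direction rather than magnitude. One plausible definition (a sketch only; the notebook's exact comparison is not shown in this output) counts the steps where the predicted change and the actual change from the previous close share a sign:

```python
def directional_accuracy(actual, predicted):
    """Percent of steps where the predicted move (relative to the previous
    actual value) has the same sign as the actual move."""
    hits = sum(
        (predicted[i] - actual[i - 1]) * (actual[i] - actual[i - 1]) > 0
        for i in range(1, len(actual))
    )
    return 100.0 * hits / (len(actual) - 1)

# made-up illustration: direction is right on 2 of 4 steps
close = [100.0, 102.0, 101.0, 105.0, 104.0]
preds = [100.0, 101.0, 103.0, 104.0, 106.0]
```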
TEMA
TEMA([input_arrays], [timeperiod=30])

Triple Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
9
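The TEMA in the help text above reduces EMA lag by combining three stacked EMAs: TEMA = 3·EMA1 − 3·EMA2 + EMA3, where EMA2 = EMA(EMA1) and EMA3 = EMA(EMA2). A quick sketch (first-value-seeded EMAs, so warm-up values differ slightly from `talib.TEMA`):

```python
def ema(xs, n):
    """Exponential moving average, seeded with the first observation."""
    alpha, out = 2.0 / (n + 1), [xs[0]]
    for x in xs[1:]:
        out.append(alpha * x + (1 - alpha) * out[-1])
    return out

def tema(xs, timeperiod=30):
    """TEMA = 3*EMA1 - 3*EMA2 + EMA3: cancels the lag of a single EMA."""
    e1 = ema(xs, timeperiod)
    e2 = ema(e1, timeperiod)
    e3 = ema(e2, timeperiod)
    return [3 * a - 3 * b + c for a, b, c in zip(e1, e2, e3)]
```

On a steady linear trend the three lag terms cancel exactly, which is why TEMA hugs trending input much more closely than a single EMA of the same period.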

Working on TEMA predictions
parameters used :  808 269
Performing stepwise search to minimize aic
 ARIMA(1,3,1)(0,0,0)[0]             : AIC=-16493.570, Time=2.72 sec
 ARIMA(0,3,0)(0,0,0)[0]             : AIC=-15527.581, Time=7.34 sec
 ARIMA(1,3,0)(0,0,0)[0]             : AIC=-16154.477, Time=7.07 sec
 ARIMA(0,3,1)(0,0,0)[0]             : AIC=-15134.948, Time=6.40 sec
 ARIMA(2,3,1)(0,0,0)[0]             : AIC=-16538.454, Time=8.03 sec
 ARIMA(2,3,0)(0,0,0)[0]             : AIC=-16271.346, Time=2.46 sec
 ARIMA(3,3,1)(0,0,0)[0]             : AIC=-16350.992, Time=12.60 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal
  return np.roots(self.polynomial_reduced_ma)**-1
 ARIMA(2,3,2)(0,0,0)[0]             : AIC=-16200.149, Time=3.43 sec
 ARIMA(1,3,2)(0,0,0)[0]             : AIC=-16461.809, Time=16.42 sec
 ARIMA(3,3,0)(0,0,0)[0]             : AIC=-16384.147, Time=3.34 sec
 ARIMA(3,3,2)(0,0,0)[0]             : AIC=inf, Time=8.64 sec
 ARIMA(2,3,1)(0,0,0)[0] intercept   : AIC=-15110.164, Time=5.24 sec

Best model:  ARIMA(2,3,1)(0,0,0)[0]          
Total fit time: 83.700 seconds
                               SARIMAX Results                                
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(2, 3, 1)   Log Likelihood                8294.227
Date:                Sun, 12 Dec 2021   AIC                         -16538.454
Time:                        16:29:48   BIC                         -16421.183
Sample:                             0   HQIC                        -16493.417
                                - 808                                         
Covariance Type:                  opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
x1          3.591e-07      0.001      0.000      1.000      -0.002       0.002
x2            3.6e-07      0.002      0.000      1.000      -0.003       0.003
x3          3.611e-07      0.001      0.000      1.000      -0.002       0.002
x4             1.0000      0.000   2628.605      0.000       0.999       1.001
x5          3.432e-07      0.000      0.001      0.999      -0.001       0.001
x6          1.714e-07   4.05e-05      0.004      0.997   -7.91e-05    7.95e-05
x7          3.541e-07      0.001      0.000      1.000      -0.003       0.003
x8            -0.0002      0.000     -1.006      0.315      -0.001       0.000
x9         -7.559e-08      0.000     -0.000      1.000      -0.001       0.001
x10            0.0001      0.000      0.492      0.623      -0.000       0.001
x11           -0.0006      0.000     -2.697      0.007      -0.001      -0.000
x12            0.0005      0.000      1.741      0.082   -5.97e-05       0.001
x13           3.6e-07      0.000      0.002      0.999      -0.000       0.000
x14         1.003e-06      0.001      0.001      0.999      -0.002       0.002
x15         3.506e-07   7.16e-05      0.005      0.996      -0.000       0.000
x16         5.157e-07      0.000      0.005      0.996      -0.000       0.000
x17         3.516e-07   6.59e-05      0.005      0.996      -0.000       0.000
x18         1.166e-07      0.000      0.001      1.000      -0.000       0.000
x19         3.922e-07    7.5e-05      0.005      0.996      -0.000       0.000
x20         -3.64e-07      0.000     -0.002      0.999      -0.000       0.000
x21         4.458e-07      0.000      0.004      0.997      -0.000       0.000
ar.L1         -0.4019   4.12e-05  -9758.484      0.000      -0.402      -0.402
ar.L2         -0.1006   1.58e-05  -6360.873      0.000      -0.101      -0.101
ma.L1         -0.7963   8.45e-06  -9.43e+04      0.000      -0.796      -0.796
sigma2      9.048e-11    7.2e-11      1.257      0.209   -5.06e-11    2.32e-10
===================================================================================
Ljung-Box (L1) (Q):                  64.02   Jarque-Bera (JB):           4424775.47
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             5.53
Prob(H) (two-sided):                  0.00   Kurtosis:                       366.04
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.02e+20. Standard errors may be unstable.
ARIMA order: (2, 3, 1) 

Epoch 1/500

Epoch 00001: val_loss improved from inf to 0.04701, saving model to LSTM8.h5
90/90 - 4s - loss: 1.2615 - val_loss: 0.0470 - lr: 0.0010 - 4s/epoch - 41ms/step
Epoch 2/500

Epoch 00002: val_loss did not improve from 0.04701
90/90 - 0s - loss: 1.0630 - val_loss: 0.0557 - lr: 0.0010 - 390ms/epoch - 4ms/step
Epoch 3/500

Epoch 00003: val_loss did not improve from 0.04701
90/90 - 0s - loss: 0.9138 - val_loss: 0.0651 - lr: 0.0010 - 409ms/epoch - 5ms/step
Epoch 4/500

Epoch 00004: val_loss did not improve from 0.04701
90/90 - 0s - loss: 0.8175 - val_loss: 0.0747 - lr: 0.0010 - 395ms/epoch - 4ms/step
Epoch 5/500

Epoch 00005: val_loss did not improve from 0.04701
90/90 - 0s - loss: 0.7588 - val_loss: 0.0838 - lr: 0.0010 - 389ms/epoch - 4ms/step
Epoch 6/500

Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.

Epoch 00006: val_loss did not improve from 0.04701
90/90 - 0s - loss: 0.7204 - val_loss: 0.0927 - lr: 0.0010 - 411ms/epoch - 5ms/step
Epoch 7/500

Epoch 00007: val_loss did not improve from 0.04701
90/90 - 0s - loss: 0.7024 - val_loss: 0.0935 - lr: 1.0000e-04 - 430ms/epoch - 5ms/step
... (epochs 8-50: val_loss never improved below 0.04701; ReduceLROnPlateau cut the learning rate to 1e-05 at epoch 11) ...
Epoch 51/500

Epoch 00051: val_loss did not improve from 0.04701
90/90 - 0s - loss: 0.6766 - val_loss: 0.1048 - lr: 1.0000e-05 - 400ms/epoch - 4ms/step
Epoch 00051: early stopping
SMA
Prediction vs Close:		54.85% Accuracy
Prediction vs Prediction:	47.01% Accuracy
MSE:	 27.593274833467863 
RMSE:	 5.252930118844897 
MAPE:	 4.117405060238624

EMA
Prediction vs Close:		54.85% Accuracy
Prediction vs Prediction:	49.63% Accuracy
MSE:	 36.00024912834101 
RMSE:	 6.000020760659167 
MAPE:	 4.70426166066095

WMA
Prediction vs Close:		54.48% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 41.43257052147296 
RMSE:	 6.4368136932393005 
MAPE:	 5.073793611210466

DEMA
Prediction vs Close:		52.24% Accuracy
Prediction vs Prediction:	45.15% Accuracy
MSE:	 181.24518187853474 
RMSE:	 13.462733076108087 
MAPE:	 12.35792879577932

KAMA
Prediction vs Close:		56.72% Accuracy
Prediction vs Prediction:	46.27% Accuracy
MSE:	 27.433326731723586 
RMSE:	 5.237683336335214 
MAPE:	 4.154161853007022

MIDPOINT
Prediction vs Close:		51.87% Accuracy
Prediction vs Prediction:	44.78% Accuracy
MSE:	 17.391761137690516 
RMSE:	 4.170343047962663 
MAPE:	 3.393694867236689

T3
Prediction vs Close:		53.73% Accuracy
Prediction vs Prediction:	47.76% Accuracy
MSE:	 65.97403618729489 
RMSE:	 8.122440285240321 
MAPE:	 6.528054740759965

TEMA
Prediction vs Close:		51.87% Accuracy
Prediction vs Prediction:	47.76% Accuracy
MSE:	 23.78118431614061 
RMSE:	 4.8765955661855545 
MAPE:	 4.3174203405459135
Runtime: mins: 41.84255483309998
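
The accuracy, MSE, RMSE, and MAPE figures above come from helper functions that are not shown in this section. A minimal sketch of how such metrics could be computed (the `evaluate_forecast` name and exact accuracy definition are assumptions, not the notebook's actual helper):

```python
import numpy as np

def evaluate_forecast(y_true, y_pred):
    """Return MSE, RMSE, MAPE (%) and directional accuracy (%) for a forecast."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mse = np.mean((y_true - y_pred) ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs((y_true - y_pred) / y_true)) * 100
    # Directional accuracy: fraction of steps where the predicted move
    # has the same sign as the actual move.
    actual_dir = np.sign(np.diff(y_true))
    pred_dir = np.sign(np.diff(y_pred))
    accuracy = np.mean(actual_dir == pred_dir) * 100
    return mse, rmse, mape, accuracy
```

Note that MAPE is scale-free while MSE/RMSE are in price units, which is why the two can rank the moving averages differently.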

Architecture Used

In [ ]:
from google.colab import files
import cv2
uploaded = files.upload()
In [ ]:
import matplotlib.pyplot as plt
imgfile = 'Experiment5.png'
img = cv2.cvtColor(cv2.imread(imgfile), cv2.COLOR_BGR2RGB)  # OpenCV loads BGR; convert for matplotlib
plt.figure(figsize=(20,10))
plt.axis("off")
plt.title('LSTM Architecture '+imgfile,fontsize=18)
plt.imshow(img)

Model Plots

In [85]:
import json

with open('simulation8_data.json') as json_file:
    simulation8 = json.load(json_file)
fileimg = 'Experiment8'
In [86]:
for i in range(len(list(simulation8.keys()))):
  SIM = list(simulation8.keys())[i]
  plot_train(simulation8,SIM)
  plot_test(simulation8,SIM)
----- Train RMSE for SMA ----- 19.309560563509795
----- Train_MSE_LSTM for SMA ----- 372.8591291558527
----- Train MAE LSTM for SMA ----- 19.28235636135139
----- Test RMSE for SMA----- 5.252930118844897
----- Test_MSE_LSTM for SMA----- 27.593274833467863
----- Test_MAE_LSTM for SMA----- 4.117405060238624
----- Train RMSE for EMA ----- 23.393549647032383
----- Train_MSE_LSTM for EMA ----- 547.258165088169
----- Train MAE LSTM for EMA ----- 23.379056536325134
----- Test RMSE for EMA----- 6.000020760659167
----- Test_MSE_LSTM for EMA----- 36.00024912834101
----- Test_MAE_LSTM for EMA----- 4.70426166066095
----- Train RMSE for WMA ----- 21.96266675676863
----- Train_MSE_LSTM for WMA ----- 482.35873106886993
----- Train MAE LSTM for WMA ----- 21.93508726535457
----- Test RMSE for WMA----- 6.4368136932393005
----- Test_MSE_LSTM for WMA----- 41.43257052147296
----- Test_MAE_LSTM for WMA----- 5.073793611210466
----- Train RMSE for DEMA ----- 25.637680841649903
----- Train_MSE_LSTM for DEMA ----- 657.2906789383026
----- Train MAE LSTM for DEMA ----- 25.614362844146125
----- Test RMSE for DEMA----- 13.462733076108087
----- Test_MSE_LSTM for DEMA----- 181.24518187853474
----- Test_MAE_LSTM for DEMA----- 12.35792879577932
----- Train RMSE for KAMA ----- 15.645649078988193
----- Train_MSE_LSTM for KAMA ----- 244.78633510284408
----- Train MAE LSTM for KAMA ----- 15.537092394167834
----- Test RMSE for KAMA----- 5.237683336335214
----- Test_MSE_LSTM for KAMA----- 27.433326731723586
----- Test_MAE_LSTM for KAMA----- 4.154161853007022
----- Train RMSE for MIDPOINT ----- 15.708219768951658
----- Train_MSE_LSTM for MIDPOINT ----- 246.74816830968368
----- Train MAE LSTM for MIDPOINT ----- 15.633552206624852
----- Test RMSE for MIDPOINT----- 4.170343047962663
----- Test_MSE_LSTM for MIDPOINT----- 17.391761137690516
----- Test_MAE_LSTM for MIDPOINT----- 3.393694867236689
----- Train RMSE for T3 ----- 23.305790930247145
----- Train_MSE_LSTM for T3 ----- 543.15989088439
----- Train MAE LSTM for T3 ----- 23.296647801257595
----- Test RMSE for T3----- 8.122440285240321
----- Test_MSE_LSTM for T3----- 65.97403618729489
----- Test_MAE_LSTM for T3----- 6.528054740759965
----- Train RMSE for TEMA ----- 19.052881062333018
----- Train_MSE_LSTM for TEMA ----- 363.0122767754081
----- Train MAE LSTM for TEMA ----- 19.014995029657193
----- Test RMSE for TEMA----- 4.8765955661855545
----- Test_MSE_LSTM for TEMA----- 23.78118431614061
----- Test_MAE_LSTM for TEMA----- 4.3174203405459135
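
The per-MA metrics printed above are easier to compare side by side in one table. A sketch assuming the nested dict layout used later in this section (`simulation[ma]['final']` with `mse`/`rmse`/`mae` keys); the `metrics_table` helper itself is hypothetical:

```python
import pandas as pd

def metrics_table(simulation):
    """Collect each moving average's final test metrics into a DataFrame,
    assuming every entry carries a 'final' dict with mse/rmse/mae keys."""
    rows = {ma: simulation[ma]['final'] for ma in simulation}
    # Transpose so MAs become rows, then fix the column order.
    return pd.DataFrame(rows).T[['mse', 'rmse', 'mae']]
```

Sorting the result, e.g. `metrics_table(simulation8).sort_values('rmse')`, makes the best-performing MA immediately visible.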

List of RMSE, MSE & MAE scores for Test data

In [ ]:
import json
with open('simulation1_data.json') as json_file:
    simulation1 = json.load(json_file)

with open('simulation2_data.json') as json_file:
    simulation2 = json.load(json_file)

with open('simulation3_data.json') as json_file:
    simulation3 = json.load(json_file)

with open('simulation4_data.json') as json_file:
    simulation4 = json.load(json_file)

with open('simulation5_data.json') as json_file:
    simulation5 = json.load(json_file)

with open('simulation6_data.json') as json_file:
    simulation6 = json.load(json_file)

with open('simulation7_data.json') as json_file:
    simulation7 = json.load(json_file)

with open('simulation8_data.json') as json_file:
    simulation8 = json.load(json_file)
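
The eight near-identical `with open(...)` blocks above could be collapsed into a single loop. A sketch, assuming the same `simulation{i}_data.json` naming; the `load_simulations` helper is hypothetical:

```python
import json
from pathlib import Path

def load_simulations(directory='.', count=8):
    """Load simulation{i}_data.json for i = 1..count into a dict,
    a loop equivalent of the eight open() blocks above."""
    sims = {}
    for i in range(1, count + 1):
        path = Path(directory) / f'simulation{i}_data.json'
        with open(path) as json_file:
            sims[f'simulation{i}'] = json.load(json_file)
    return sims
```

The later cells reference `simulation1`..`simulation8` individually, so the dict form would also require updating those references (e.g. `sims['simulation8']`).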
In [ ]:
text = 'Stock with Google Trends '
simulations = [simulation1,simulation2,simulation3,simulation4,simulation5,simulation6,simulation7,simulation8]
for i,simulation in enumerate(simulations):
  for ma in simulation.keys():
    print(text+'Experiment ',i+1,' for MA :',ma,'the MSE  is: ',simulation[ma]['final']['mse'])
    print(text+'Experiment ',i+1,' for MA :',ma,'the RMSE is: ',simulation[ma]['final']['rmse'])
    print(text+'Experiment ',i+1,' for MA :',ma,'the MAE is: ',simulation[ma]['final']['mae'])
Stock with Google Trends Experiment  1  for MA : SMA the MSE  is:  216.26215770312788
Stock with Google Trends Experiment  1  for MA : SMA the RMSE is:  14.705854538350632
Stock with Google Trends Experiment  1  for MA : SMA the MAE is:  11.92367463073216
Stock with Google Trends Experiment  1  for MA : EMA the MSE  is:  122.96033735804009
Stock with Google Trends Experiment  1  for MA : EMA the RMSE is:  11.08874823224155
Stock with Google Trends Experiment  1  for MA : EMA the MAE is:  9.251696034357076
Stock with Google Trends Experiment  1  for MA : WMA the MSE  is:  56.80196950717294
Stock with Google Trends Experiment  1  for MA : WMA the RMSE is:  7.5367081346681415
Stock with Google Trends Experiment  1  for MA : WMA the MAE is:  5.956122993340066
Stock with Google Trends Experiment  1  for MA : DEMA the MSE  is:  120.51656357671082
Stock with Google Trends Experiment  1  for MA : DEMA the RMSE is:  10.978003624371365
Stock with Google Trends Experiment  1  for MA : DEMA the MAE is:  9.343426819843298
Stock with Google Trends Experiment  1  for MA : KAMA the MSE  is:  80.73205723036311
Stock with Google Trends Experiment  1  for MA : KAMA the RMSE is:  8.985101959931402
Stock with Google Trends Experiment  1  for MA : KAMA the MAE is:  7.079003376879244
Stock with Google Trends Experiment  1  for MA : MIDPOINT the MSE  is:  75.80130571921515
Stock with Google Trends Experiment  1  for MA : MIDPOINT the RMSE is:  8.70639453041356
Stock with Google Trends Experiment  1  for MA : MIDPOINT the MAE is:  7.130945881105426
Stock with Google Trends Experiment  1  for MA : T3 the MSE  is:  46.36148283886003
Stock with Google Trends Experiment  1  for MA : T3 the RMSE is:  6.808926702415001
Stock with Google Trends Experiment  1  for MA : T3 the MAE is:  5.596706034896943
Stock with Google Trends Experiment  1  for MA : TEMA the MSE  is:  32.02006831549358
Stock with Google Trends Experiment  1  for MA : TEMA the RMSE is:  5.658627776722337
Stock with Google Trends Experiment  1  for MA : TEMA the MAE is:  4.840005256997922
Stock with Google Trends Experiment  2  for MA : SMA the MSE  is:  136.8787627980392
Stock with Google Trends Experiment  2  for MA : SMA the RMSE is:  11.699519767838302
Stock with Google Trends Experiment  2  for MA : SMA the MAE is:  9.782684450137419
Stock with Google Trends Experiment  2  for MA : EMA the MSE  is:  325.2876788257106
Stock with Google Trends Experiment  2  for MA : EMA the RMSE is:  18.035733387520192
Stock with Google Trends Experiment  2  for MA : EMA the MAE is:  15.58326491461623
Stock with Google Trends Experiment  2  for MA : WMA the MSE  is:  142.26654445774975
Stock with Google Trends Experiment  2  for MA : WMA the RMSE is:  11.927554001460221
Stock with Google Trends Experiment  2  for MA : WMA the MAE is:  9.787176512242624
Stock with Google Trends Experiment  2  for MA : DEMA the MSE  is:  171.26734938505615
Stock with Google Trends Experiment  2  for MA : DEMA the RMSE is:  13.086915197442679
Stock with Google Trends Experiment  2  for MA : DEMA the MAE is:  11.821213958102536
Stock with Google Trends Experiment  2  for MA : KAMA the MSE  is:  52.24625253706662
Stock with Google Trends Experiment  2  for MA : KAMA the RMSE is:  7.228156925321048
Stock with Google Trends Experiment  2  for MA : KAMA the MAE is:  5.787764162926424
Stock with Google Trends Experiment  2  for MA : MIDPOINT the MSE  is:  51.03620830548878
Stock with Google Trends Experiment  2  for MA : MIDPOINT the RMSE is:  7.143963067197981
Stock with Google Trends Experiment  2  for MA : MIDPOINT the MAE is:  5.7689786092745114
Stock with Google Trends Experiment  2  for MA : T3 the MSE  is:  131.82692716984482
Stock with Google Trends Experiment  2  for MA : T3 the RMSE is:  11.481590794391028
Stock with Google Trends Experiment  2  for MA : T3 the MAE is:  9.148826908925223
Stock with Google Trends Experiment  2  for MA : TEMA the MSE  is:  103.01748696218638
Stock with Google Trends Experiment  2  for MA : TEMA the RMSE is:  10.149753049320283
Stock with Google Trends Experiment  2  for MA : TEMA the MAE is:  9.012271728014667
Stock with Google Trends Experiment  3  for MA : SMA the MSE  is:  60.72793966434877
Stock with Google Trends Experiment  3  for MA : SMA the RMSE is:  7.792813334370892
Stock with Google Trends Experiment  3  for MA : SMA the MAE is:  6.245110563496496
Stock with Google Trends Experiment  3  for MA : EMA the MSE  is:  24.31424046870049
Stock with Google Trends Experiment  3  for MA : EMA the RMSE is:  4.930947218202654
Stock with Google Trends Experiment  3  for MA : EMA the MAE is:  4.072073008160127
Stock with Google Trends Experiment  3  for MA : WMA the MSE  is:  68.15170187847146
Stock with Google Trends Experiment  3  for MA : WMA the RMSE is:  8.255404404296101
Stock with Google Trends Experiment  3  for MA : WMA the MAE is:  6.806281257852447
Stock with Google Trends Experiment  3  for MA : DEMA the MSE  is:  43.922177447330554
Stock with Google Trends Experiment  3  for MA : DEMA the RMSE is:  6.627380888958364
Stock with Google Trends Experiment  3  for MA : DEMA the MAE is:  5.414540694927783
Stock with Google Trends Experiment  3  for MA : KAMA the MSE  is:  23.99643218488633
Stock with Google Trends Experiment  3  for MA : KAMA the RMSE is:  4.8986153334270215
Stock with Google Trends Experiment  3  for MA : KAMA the MAE is:  3.8674202618000764
Stock with Google Trends Experiment  3  for MA : MIDPOINT the MSE  is:  28.7191654730941
Stock with Google Trends Experiment  3  for MA : MIDPOINT the RMSE is:  5.359026541555295
Stock with Google Trends Experiment  3  for MA : MIDPOINT the MAE is:  4.42030651732032
Stock with Google Trends Experiment  3  for MA : T3 the MSE  is:  57.05690799124099
Stock with Google Trends Experiment  3  for MA : T3 the RMSE is:  7.553602318843705
Stock with Google Trends Experiment  3  for MA : T3 the MAE is:  6.064989593585796
Stock with Google Trends Experiment  3  for MA : TEMA the MSE  is:  20.964688340236016
Stock with Google Trends Experiment  3  for MA : TEMA the RMSE is:  4.578721256009806
Stock with Google Trends Experiment  3  for MA : TEMA the MAE is:  3.7212897315589664
Stock with Google Trends Experiment  4  for MA : SMA the MSE  is:  23.31830336349202
Stock with Google Trends Experiment  4  for MA : SMA the RMSE is:  4.828902915103183
Stock with Google Trends Experiment  4  for MA : SMA the MAE is:  3.806885992834059
Stock with Google Trends Experiment  4  for MA : EMA the MSE  is:  31.4391560756623
Stock with Google Trends Experiment  4  for MA : EMA the RMSE is:  5.607063052584865
Stock with Google Trends Experiment  4  for MA : EMA the MAE is:  4.398444723456604
Stock with Google Trends Experiment  4  for MA : WMA the MSE  is:  48.85272948439791
Stock with Google Trends Experiment  4  for MA : WMA the RMSE is:  6.989472761546318
Stock with Google Trends Experiment  4  for MA : WMA the MAE is:  5.616901258925532
Stock with Google Trends Experiment  4  for MA : DEMA the MSE  is:  143.4471215002686
Stock with Google Trends Experiment  4  for MA : DEMA the RMSE is:  11.976941241413376
Stock with Google Trends Experiment  4  for MA : DEMA the MAE is:  10.686872819228396
Stock with Google Trends Experiment  4  for MA : KAMA the MSE  is:  23.251670970583447
Stock with Google Trends Experiment  4  for MA : KAMA the RMSE is:  4.821998648961181
Stock with Google Trends Experiment  4  for MA : KAMA the MAE is:  3.833042253232743
Stock with Google Trends Experiment  4  for MA : MIDPOINT the MSE  is:  16.39872837560197
Stock with Google Trends Experiment  4  for MA : MIDPOINT the RMSE is:  4.0495343405880595
Stock with Google Trends Experiment  4  for MA : MIDPOINT the MAE is:  3.299619771312048
Stock with Google Trends Experiment  4  for MA : T3 the MSE  is:  85.43536380908036
Stock with Google Trends Experiment  4  for MA : T3 the RMSE is:  9.24312521872772
Stock with Google Trends Experiment  4  for MA : T3 the MAE is:  7.5496901284439915
Stock with Google Trends Experiment  4  for MA : TEMA the MSE  is:  17.749245841001986
Stock with Google Trends Experiment  4  for MA : TEMA the RMSE is:  4.21298538343085
Stock with Google Trends Experiment  4  for MA : TEMA the MAE is:  3.636908590169574
Stock with Google Trends Experiment  5  for MA : SMA the MSE  is:  38.984836670221576
Stock with Google Trends Experiment  5  for MA : SMA the RMSE is:  6.24378384236847
Stock with Google Trends Experiment  5  for MA : SMA the MAE is:  5.10393861237253
Stock with Google Trends Experiment  5  for MA : EMA the MSE  is:  124.25764226707467
Stock with Google Trends Experiment  5  for MA : EMA the RMSE is:  11.1470912020614
Stock with Google Trends Experiment  5  for MA : EMA the MAE is:  9.17724208981177
Stock with Google Trends Experiment  5  for MA : WMA the MSE  is:  38.21405797999008
Stock with Google Trends Experiment  5  for MA : WMA the RMSE is:  6.1817520154071275
Stock with Google Trends Experiment  5  for MA : WMA the MAE is:  5.0592557753421294
Stock with Google Trends Experiment  5  for MA : DEMA the MSE  is:  272.80520892035037
Stock with Google Trends Experiment  5  for MA : DEMA the RMSE is:  16.51681594376926
Stock with Google Trends Experiment  5  for MA : DEMA the MAE is:  15.690440427295842
Stock with Google Trends Experiment  5  for MA : KAMA the MSE  is:  39.855043502438484
Stock with Google Trends Experiment  5  for MA : KAMA the RMSE is:  6.313085101789654
Stock with Google Trends Experiment  5  for MA : KAMA the MAE is:  4.932118299391016
Stock with Google Trends Experiment  5  for MA : MIDPOINT the MSE  is:  247.8059617737088
Stock with Google Trends Experiment  5  for MA : MIDPOINT the RMSE is:  15.741853822650901
Stock with Google Trends Experiment  5  for MA : MIDPOINT the MAE is:  13.137429929578502
Stock with Google Trends Experiment  5  for MA : T3 the MSE  is:  210.84512917819418
Stock with Google Trends Experiment  5  for MA : T3 the RMSE is:  14.520507194247527
Stock with Google Trends Experiment  5  for MA : T3 the MAE is:  11.877711162377306
Stock with Google Trends Experiment  5  for MA : TEMA the MSE  is:  47.81525777472662
Stock with Google Trends Experiment  5  for MA : TEMA the RMSE is:  6.9148577552055706
Stock with Google Trends Experiment  5  for MA : TEMA the MAE is:  5.805355375153735
Stock with Google Trends Experiment  6  for MA : SMA the MSE  is:  71.66891031373025
Stock with Google Trends Experiment  6  for MA : SMA the RMSE is:  8.465749247038342
Stock with Google Trends Experiment  6  for MA : SMA the MAE is:  6.880610177712922
Stock with Google Trends Experiment  6  for MA : EMA the MSE  is:  67.24711347278334
Stock with Google Trends Experiment  6  for MA : EMA the RMSE is:  8.200433736869249
Stock with Google Trends Experiment  6  for MA : EMA the MAE is:  6.781803215137433
Stock with Google Trends Experiment  6  for MA : WMA the MSE  is:  58.550384022767716
Stock with Google Trends Experiment  6  for MA : WMA the RMSE is:  7.6518222681115455
Stock with Google Trends Experiment  6  for MA : WMA the MAE is:  6.1413991074844
Stock with Google Trends Experiment  6  for MA : DEMA the MSE  is:  127.72881036471918
Stock with Google Trends Experiment  6  for MA : DEMA the RMSE is:  11.301717142307146
Stock with Google Trends Experiment  6  for MA : DEMA the MAE is:  10.306940424406019
Stock with Google Trends Experiment  6  for MA : KAMA the MSE  is:  43.855468679122
Stock with Google Trends Experiment  6  for MA : KAMA the RMSE is:  6.622346161227303
Stock with Google Trends Experiment  6  for MA : KAMA the MAE is:  5.4751276749367985
Stock with Google Trends Experiment  6  for MA : MIDPOINT the MSE  is:  67.44416314351042
Stock with Google Trends Experiment  6  for MA : MIDPOINT the RMSE is:  8.212439536673035
Stock with Google Trends Experiment  6  for MA : MIDPOINT the MAE is:  6.768235104271493
Stock with Google Trends Experiment  6  for MA : T3 the MSE  is:  154.873959597027
Stock with Google Trends Experiment  6  for MA : T3 the RMSE is:  12.444836664136135
Stock with Google Trends Experiment  6  for MA : T3 the MAE is:  10.329006454112236
Stock with Google Trends Experiment  6  for MA : TEMA the MSE  is:  163.5522149093863
Stock with Google Trends Experiment  6  for MA : TEMA the RMSE is:  12.788753454085597
Stock with Google Trends Experiment  6  for MA : TEMA the MAE is:  11.455728191899736
Stock with Google Trends Experiment  7  for MA : SMA the MSE  is:  37.148636819682245
Stock with Google Trends Experiment  7  for MA : SMA the RMSE is:  6.094968155756209
Stock with Google Trends Experiment  7  for MA : SMA the MAE is:  5.090179331518223
Stock with Google Trends Experiment  7  for MA : EMA the MSE  is:  31.582654654902484
Stock with Google Trends Experiment  7  for MA : EMA the RMSE is:  5.619844718041815
Stock with Google Trends Experiment  7  for MA : EMA the MAE is:  4.507182634072088
Stock with Google Trends Experiment  7  for MA : WMA the MSE  is:  65.00101872981564
Stock with Google Trends Experiment  7  for MA : WMA the RMSE is:  8.062320926992156
Stock with Google Trends Experiment  7  for MA : WMA the MAE is:  6.705711592581163
Stock with Google Trends Experiment  7  for MA : DEMA the MSE  is:  35.269002386685244
Stock with Google Trends Experiment  7  for MA : DEMA the RMSE is:  5.938771117553298
Stock with Google Trends Experiment  7  for MA : DEMA the MAE is:  4.62878838931535
Stock with Google Trends Experiment  7  for MA : KAMA the MSE  is:  62.25156682504816
Stock with Google Trends Experiment  7  for MA : KAMA the RMSE is:  7.8899662119078915
Stock with Google Trends Experiment  7  for MA : KAMA the MAE is:  6.222956717810362
Stock with Google Trends Experiment  7  for MA : MIDPOINT the MSE  is:  84.47531990876952
Stock with Google Trends Experiment  7  for MA : MIDPOINT the RMSE is:  9.191045637399997
Stock with Google Trends Experiment  7  for MA : MIDPOINT the MAE is:  7.890202393641488
Stock with Google Trends Experiment  7  for MA : T3 the MSE  is:  92.6070950307407
Stock with Google Trends Experiment  7  for MA : T3 the RMSE is:  9.623258025780078
Stock with Google Trends Experiment  7  for MA : T3 the MAE is:  8.212466968891306
Stock with Google Trends Experiment  7  for MA : TEMA the MSE  is:  50.229608880159034
Stock with Google Trends Experiment  7  for MA : TEMA the RMSE is:  7.087285014740061
Stock with Google Trends Experiment  7  for MA : TEMA the MAE is:  6.2877236827531835
Stock with Google Trends Experiment  8  for MA : SMA the MSE  is:  27.593274833467863
Stock with Google Trends Experiment  8  for MA : SMA the RMSE is:  5.252930118844897
Stock with Google Trends Experiment  8  for MA : SMA the MAE is:  4.117405060238624
Stock with Google Trends Experiment  8  for MA : EMA the MSE  is:  36.00024912834101
Stock with Google Trends Experiment  8  for MA : EMA the RMSE is:  6.000020760659167
Stock with Google Trends Experiment  8  for MA : EMA the MAE is:  4.70426166066095
Stock with Google Trends Experiment  8  for MA : WMA the MSE  is:  41.43257052147296
Stock with Google Trends Experiment  8  for MA : WMA the RMSE is:  6.4368136932393005
Stock with Google Trends Experiment  8  for MA : WMA the MAE is:  5.073793611210466
Stock with Google Trends Experiment  8  for MA : DEMA the MSE  is:  181.24518187853474
Stock with Google Trends Experiment  8  for MA : DEMA the RMSE is:  13.462733076108087
Stock with Google Trends Experiment  8  for MA : DEMA the MAE is:  12.35792879577932
Stock with Google Trends Experiment  8  for MA : KAMA the MSE  is:  27.433326731723586
Stock with Google Trends Experiment  8  for MA : KAMA the RMSE is:  5.237683336335214
Stock with Google Trends Experiment  8  for MA : KAMA the MAE is:  4.154161853007022
Stock with Google Trends Experiment  8  for MA : MIDPOINT the MSE  is:  17.391761137690516
Stock with Google Trends Experiment  8  for MA : MIDPOINT the RMSE is:  4.170343047962663
Stock with Google Trends Experiment  8  for MA : MIDPOINT the MAE is:  3.393694867236689
Stock with Google Trends Experiment  8  for MA : T3 the MSE  is:  65.97403618729489
Stock with Google Trends Experiment  8  for MA : T3 the RMSE is:  8.122440285240321
Stock with Google Trends Experiment  8  for MA : T3 the MAE is:  6.528054740759965
Stock with Google Trends Experiment  8  for MA : TEMA the MSE  is:  23.78118431614061
Stock with Google Trends Experiment  8  for MA : TEMA the RMSE is:  4.8765955661855545
Stock with Google Trends Experiment  8  for MA : TEMA the MAE is:  4.3174203405459135
In [ ]:
text = 'Stock with Google Trends '
simulations = [simulation1,simulation2,simulation3,simulation4,simulation5,simulation6,simulation7,simulation8]
for i,simulation in enumerate(simulations):
  for ma in simulation.keys():
    print(text+'Experiment ',i+1,' for MA :',ma,'the RMSE is: ',simulation[ma]['final']['rmse'])
  for ma in simulation.keys():
    print(text+'Experiment ',i+1,' for MA :',ma,'the MSE  is: ',simulation[ma]['final']['mse'])
  for ma in simulation.keys():
    print(text+'Experiment ',i+1,' for MA :',ma,'the MAE is: ',simulation[ma]['final']['mae'])
Stock with Google Trends Experiment  1  for MA : SMA the RMSE is:  14.705854538350632
Stock with Google Trends Experiment  1  for MA : EMA the RMSE is:  11.08874823224155
Stock with Google Trends Experiment  1  for MA : WMA the RMSE is:  7.5367081346681415
Stock with Google Trends Experiment  1  for MA : DEMA the RMSE is:  10.978003624371365
Stock with Google Trends Experiment  1  for MA : KAMA the RMSE is:  8.985101959931402
Stock with Google Trends Experiment  1  for MA : MIDPOINT the RMSE is:  8.70639453041356
Stock with Google Trends Experiment  1  for MA : T3 the RMSE is:  6.808926702415001
Stock with Google Trends Experiment  1  for MA : TEMA the RMSE is:  5.658627776722337
Stock with Google Trends Experiment  1  for MA : SMA the MSE  is:  216.26215770312788
Stock with Google Trends Experiment  1  for MA : EMA the MSE  is:  122.96033735804009
Stock with Google Trends Experiment  1  for MA : WMA the MSE  is:  56.80196950717294
Stock with Google Trends Experiment  1  for MA : DEMA the MSE  is:  120.51656357671082
Stock with Google Trends Experiment  1  for MA : KAMA the MSE  is:  80.73205723036311
Stock with Google Trends Experiment  1  for MA : MIDPOINT the MSE  is:  75.80130571921515
Stock with Google Trends Experiment  1  for MA : T3 the MSE  is:  46.36148283886003
Stock with Google Trends Experiment  1  for MA : TEMA the MSE  is:  32.02006831549358
Stock with Google Trends Experiment  1  for MA : SMA the MAE is:  11.92367463073216
Stock with Google Trends Experiment  1  for MA : EMA the MAE is:  9.251696034357076
Stock with Google Trends Experiment  1  for MA : WMA the MAE is:  5.956122993340066
Stock with Google Trends Experiment  1  for MA : DEMA the MAE is:  9.343426819843298
Stock with Google Trends Experiment  1  for MA : KAMA the MAE is:  7.079003376879244
Stock with Google Trends Experiment  1  for MA : MIDPOINT the MAE is:  7.130945881105426
Stock with Google Trends Experiment  1  for MA : T3 the MAE is:  5.596706034896943
Stock with Google Trends Experiment  1  for MA : TEMA the MAE is:  4.840005256997922
Stock with Google Trends Experiment  2  for MA : SMA the RMSE is:  11.699519767838302
Stock with Google Trends Experiment  2  for MA : EMA the RMSE is:  18.035733387520192
Stock with Google Trends Experiment  2  for MA : WMA the RMSE is:  11.927554001460221
Stock with Google Trends Experiment  2  for MA : DEMA the RMSE is:  13.086915197442679
Stock with Google Trends Experiment  2  for MA : KAMA the RMSE is:  7.228156925321048
Stock with Google Trends Experiment  2  for MA : MIDPOINT the RMSE is:  7.143963067197981
Stock with Google Trends Experiment  2  for MA : T3 the RMSE is:  11.481590794391028
Stock with Google Trends Experiment  2  for MA : TEMA the RMSE is:  10.149753049320283
Stock with Google Trends Experiment  2  for MA : SMA the MSE  is:  136.8787627980392
Stock with Google Trends Experiment  2  for MA : EMA the MSE  is:  325.2876788257106
Stock with Google Trends Experiment  2  for MA : WMA the MSE  is:  142.26654445774975
Stock with Google Trends Experiment  2  for MA : DEMA the MSE  is:  171.26734938505615
Stock with Google Trends Experiment  2  for MA : KAMA the MSE  is:  52.24625253706662
Stock with Google Trends Experiment  2  for MA : MIDPOINT the MSE  is:  51.03620830548878
Stock with Google Trends Experiment  2  for MA : T3 the MSE  is:  131.82692716984482
Stock with Google Trends Experiment  2  for MA : TEMA the MSE  is:  103.01748696218638
Stock with Google Trends Experiment  2  for MA : SMA the MAE is:  9.782684450137419
Stock with Google Trends Experiment  2  for MA : EMA the MAE is:  15.58326491461623
Stock with Google Trends Experiment  2  for MA : WMA the MAE is:  9.787176512242624
Stock with Google Trends Experiment  2  for MA : DEMA the MAE is:  11.821213958102536
Stock with Google Trends Experiment  2  for MA : KAMA the MAE is:  5.787764162926424
Stock with Google Trends Experiment  2  for MA : MIDPOINT the MAE is:  5.7689786092745114
Stock with Google Trends Experiment  2  for MA : T3 the MAE is:  9.148826908925223
Stock with Google Trends Experiment  2  for MA : TEMA the MAE is:  9.012271728014667
Stock with Google Trends Experiment  3  for MA : SMA the RMSE is:  7.792813334370892
Stock with Google Trends Experiment  3  for MA : EMA the RMSE is:  4.930947218202654
Stock with Google Trends Experiment  3  for MA : WMA the RMSE is:  8.255404404296101
Stock with Google Trends Experiment  3  for MA : DEMA the RMSE is:  6.627380888958364
Stock with Google Trends Experiment  3  for MA : KAMA the RMSE is:  4.8986153334270215
Stock with Google Trends Experiment  3  for MA : MIDPOINT the RMSE is:  5.359026541555295
Stock with Google Trends Experiment  3  for MA : T3 the RMSE is:  7.553602318843705
Stock with Google Trends Experiment  3  for MA : TEMA the RMSE is:  4.578721256009806
Stock with Google Trends Experiment  3  for MA : SMA the MSE  is:  60.72793966434877
Stock with Google Trends Experiment  3  for MA : EMA the MSE  is:  24.31424046870049
Stock with Google Trends Experiment  3  for MA : WMA the MSE  is:  68.15170187847146
Stock with Google Trends Experiment  3  for MA : DEMA the MSE  is:  43.922177447330554
Stock with Google Trends Experiment  3  for MA : KAMA the MSE  is:  23.99643218488633
Stock with Google Trends Experiment  3  for MA : MIDPOINT the MSE  is:  28.7191654730941
Stock with Google Trends Experiment  3  for MA : T3 the MSE  is:  57.05690799124099
Stock with Google Trends Experiment  3  for MA : TEMA the MSE  is:  20.964688340236016
Stock with Google Trends Experiment  3  for MA : SMA the MAE is:  6.245110563496496
Stock with Google Trends Experiment  3  for MA : EMA the MAE is:  4.072073008160127
Stock with Google Trends Experiment  3  for MA : WMA the MAE is:  6.806281257852447
Stock with Google Trends Experiment  3  for MA : DEMA the MAE is:  5.414540694927783
Stock with Google Trends Experiment  3  for MA : KAMA the MAE is:  3.8674202618000764
Stock with Google Trends Experiment  3  for MA : MIDPOINT the MAE is:  4.42030651732032
Stock with Google Trends Experiment  3  for MA : T3 the MAE is:  6.064989593585796
Stock with Google Trends Experiment  3  for MA : TEMA the MAE is:  3.7212897315589664
Stock with Google Trends Experiment  4  for MA : SMA the RMSE is:  4.828902915103183
Stock with Google Trends Experiment  4  for MA : EMA the RMSE is:  5.607063052584865
Stock with Google Trends Experiment  4  for MA : WMA the RMSE is:  6.989472761546318
Stock with Google Trends Experiment  4  for MA : DEMA the RMSE is:  11.976941241413376
Stock with Google Trends Experiment  4  for MA : KAMA the RMSE is:  4.821998648961181
Stock with Google Trends Experiment  4  for MA : MIDPOINT the RMSE is:  4.0495343405880595
Stock with Google Trends Experiment  4  for MA : T3 the RMSE is:  9.24312521872772
Stock with Google Trends Experiment  4  for MA : TEMA the RMSE is:  4.21298538343085
Stock with Google Trends Experiment  4  for MA : SMA the MSE  is:  23.31830336349202
Stock with Google Trends Experiment  4  for MA : EMA the MSE  is:  31.4391560756623
Stock with Google Trends Experiment  4  for MA : WMA the MSE  is:  48.85272948439791
Stock with Google Trends Experiment  4  for MA : DEMA the MSE  is:  143.4471215002686
Stock with Google Trends Experiment  4  for MA : KAMA the MSE  is:  23.251670970583447
Stock with Google Trends Experiment  4  for MA : MIDPOINT the MSE  is:  16.39872837560197
Stock with Google Trends Experiment  4  for MA : T3 the MSE  is:  85.43536380908036
Stock with Google Trends Experiment  4  for MA : TEMA the MSE  is:  17.749245841001986
Stock with Google Trends Experiment  4  for MA : SMA the MAE is:  3.806885992834059
Stock with Google Trends Experiment  4  for MA : EMA the MAE is:  4.398444723456604
Stock with Google Trends Experiment  4  for MA : WMA the MAE is:  5.616901258925532
Stock with Google Trends Experiment  4  for MA : DEMA the MAE is:  10.686872819228396
Stock with Google Trends Experiment  4  for MA : KAMA the MAE is:  3.833042253232743
Stock with Google Trends Experiment  4  for MA : MIDPOINT the MAE is:  3.299619771312048
Stock with Google Trends Experiment  4  for MA : T3 the MAE is:  7.5496901284439915
Stock with Google Trends Experiment  4  for MA : TEMA the MAE is:  3.636908590169574
Stock with Google Trends Experiment  5  for MA : SMA the RMSE is:  6.24378384236847
Stock with Google Trends Experiment  5  for MA : EMA the RMSE is:  11.1470912020614
Stock with Google Trends Experiment  5  for MA : WMA the RMSE is:  6.1817520154071275
Stock with Google Trends Experiment  5  for MA : DEMA the RMSE is:  16.51681594376926
Stock with Google Trends Experiment  5  for MA : KAMA the RMSE is:  6.313085101789654
Stock with Google Trends Experiment  5  for MA : MIDPOINT the RMSE is:  15.741853822650901
Stock with Google Trends Experiment  5  for MA : T3 the RMSE is:  14.520507194247527
Stock with Google Trends Experiment  5  for MA : TEMA the RMSE is:  6.9148577552055706
Stock with Google Trends Experiment  5  for MA : SMA the MSE  is:  38.984836670221576
Stock with Google Trends Experiment  5  for MA : EMA the MSE  is:  124.25764226707467
Stock with Google Trends Experiment  5  for MA : WMA the MSE  is:  38.21405797999008
Stock with Google Trends Experiment  5  for MA : DEMA the MSE  is:  272.80520892035037
Stock with Google Trends Experiment  5  for MA : KAMA the MSE  is:  39.855043502438484
Stock with Google Trends Experiment  5  for MA : MIDPOINT the MSE  is:  247.8059617737088
Stock with Google Trends Experiment  5  for MA : T3 the MSE  is:  210.84512917819418
Stock with Google Trends Experiment  5  for MA : TEMA the MSE  is:  47.81525777472662
Stock with Google Trends Experiment  5  for MA : SMA the MAE is:  5.10393861237253
Stock with Google Trends Experiment  5  for MA : EMA the MAE is:  9.17724208981177
Stock with Google Trends Experiment  5  for MA : WMA the MAE is:  5.0592557753421294
Stock with Google Trends Experiment  5  for MA : DEMA the MAE is:  15.690440427295842
Stock with Google Trends Experiment  5  for MA : KAMA the MAE is:  4.932118299391016
Stock with Google Trends Experiment  5  for MA : MIDPOINT the MAE is:  13.137429929578502
Stock with Google Trends Experiment  5  for MA : T3 the MAE is:  11.877711162377306
Stock with Google Trends Experiment  5  for MA : TEMA the MAE is:  5.805355375153735
Stock with Google Trends Experiment  6  for MA : SMA the RMSE is:  8.465749247038342
Stock with Google Trends Experiment  6  for MA : EMA the RMSE is:  8.200433736869249
Stock with Google Trends Experiment  6  for MA : WMA the RMSE is:  7.6518222681115455
Stock with Google Trends Experiment  6  for MA : DEMA the RMSE is:  11.301717142307146
Stock with Google Trends Experiment  6  for MA : KAMA the RMSE is:  6.622346161227303
Stock with Google Trends Experiment  6  for MA : MIDPOINT the RMSE is:  8.212439536673035
Stock with Google Trends Experiment  6  for MA : T3 the RMSE is:  12.444836664136135
Stock with Google Trends Experiment  6  for MA : TEMA the RMSE is:  12.788753454085597
Stock with Google Trends Experiment  6  for MA : SMA the MSE  is:  71.66891031373025
Stock with Google Trends Experiment  6  for MA : EMA the MSE  is:  67.24711347278334
Stock with Google Trends Experiment  6  for MA : WMA the MSE  is:  58.550384022767716
Stock with Google Trends Experiment  6  for MA : DEMA the MSE  is:  127.72881036471918
Stock with Google Trends Experiment  6  for MA : KAMA the MSE  is:  43.855468679122
Stock with Google Trends Experiment  6  for MA : MIDPOINT the MSE  is:  67.44416314351042
Stock with Google Trends Experiment  6  for MA : T3 the MSE  is:  154.873959597027
Stock with Google Trends Experiment  6  for MA : TEMA the MSE  is:  163.5522149093863
Stock with Google Trends Experiment  6  for MA : SMA the MAE is:  6.880610177712922
Stock with Google Trends Experiment  6  for MA : EMA the MAE is:  6.781803215137433
Stock with Google Trends Experiment  6  for MA : WMA the MAE is:  6.1413991074844
Stock with Google Trends Experiment  6  for MA : DEMA the MAE is:  10.306940424406019
Stock with Google Trends Experiment  6  for MA : KAMA the MAE is:  5.4751276749367985
Stock with Google Trends Experiment  6  for MA : MIDPOINT the MAE is:  6.768235104271493
Stock with Google Trends Experiment  6  for MA : T3 the MAE is:  10.329006454112236
Stock with Google Trends Experiment  6  for MA : TEMA the MAE is:  11.455728191899736
Stock with Google Trends Experiment  7  for MA : SMA the RMSE is:  6.094968155756209
Stock with Google Trends Experiment  7  for MA : EMA the RMSE is:  5.619844718041815
Stock with Google Trends Experiment  7  for MA : WMA the RMSE is:  8.062320926992156
Stock with Google Trends Experiment  7  for MA : DEMA the RMSE is:  5.938771117553298
Stock with Google Trends Experiment  7  for MA : KAMA the RMSE is:  7.8899662119078915
Stock with Google Trends Experiment  7  for MA : MIDPOINT the RMSE is:  9.191045637399997
Stock with Google Trends Experiment  7  for MA : T3 the RMSE is:  9.623258025780078
Stock with Google Trends Experiment  7  for MA : TEMA the RMSE is:  7.087285014740061
Stock with Google Trends Experiment  7  for MA : SMA the MSE  is:  37.148636819682245
Stock with Google Trends Experiment  7  for MA : EMA the MSE  is:  31.582654654902484
Stock with Google Trends Experiment  7  for MA : WMA the MSE  is:  65.00101872981564
Stock with Google Trends Experiment  7  for MA : DEMA the MSE  is:  35.269002386685244
Stock with Google Trends Experiment  7  for MA : KAMA the MSE  is:  62.25156682504816
Stock with Google Trends Experiment  7  for MA : MIDPOINT the MSE  is:  84.47531990876952
Stock with Google Trends Experiment  7  for MA : T3 the MSE  is:  92.6070950307407
Stock with Google Trends Experiment  7  for MA : TEMA the MSE  is:  50.229608880159034
Stock with Google Trends Experiment  7  for MA : SMA the MAE is:  5.090179331518223
Stock with Google Trends Experiment  7  for MA : EMA the MAE is:  4.507182634072088
Stock with Google Trends Experiment  7  for MA : WMA the MAE is:  6.705711592581163
Stock with Google Trends Experiment  7  for MA : DEMA the MAE is:  4.62878838931535
Stock with Google Trends Experiment  7  for MA : KAMA the MAE is:  6.222956717810362
Stock with Google Trends Experiment  7  for MA : MIDPOINT the MAE is:  7.890202393641488
Stock with Google Trends Experiment  7  for MA : T3 the MAE is:  8.212466968891306
Stock with Google Trends Experiment  7  for MA : TEMA the MAE is:  6.2877236827531835
Stock with Google Trends Experiment  8  for MA : SMA the RMSE is:  5.252930118844897
Stock with Google Trends Experiment  8  for MA : EMA the RMSE is:  6.000020760659167
Stock with Google Trends Experiment  8  for MA : WMA the RMSE is:  6.4368136932393005
Stock with Google Trends Experiment  8  for MA : DEMA the RMSE is:  13.462733076108087
Stock with Google Trends Experiment  8  for MA : KAMA the RMSE is:  5.237683336335214
Stock with Google Trends Experiment  8  for MA : MIDPOINT the RMSE is:  4.170343047962663
Stock with Google Trends Experiment  8  for MA : T3 the RMSE is:  8.122440285240321
Stock with Google Trends Experiment  8  for MA : TEMA the RMSE is:  4.8765955661855545
Stock with Google Trends Experiment  8  for MA : SMA the MSE  is:  27.593274833467863
Stock with Google Trends Experiment  8  for MA : EMA the MSE  is:  36.00024912834101
Stock with Google Trends Experiment  8  for MA : WMA the MSE  is:  41.43257052147296
Stock with Google Trends Experiment  8  for MA : DEMA the MSE  is:  181.24518187853474
Stock with Google Trends Experiment  8  for MA : KAMA the MSE  is:  27.433326731723586
Stock with Google Trends Experiment  8  for MA : MIDPOINT the MSE  is:  17.391761137690516
Stock with Google Trends Experiment  8  for MA : T3 the MSE  is:  65.97403618729489
Stock with Google Trends Experiment  8  for MA : TEMA the MSE  is:  23.78118431614061
Stock with Google Trends Experiment  8  for MA : SMA the MAE is:  4.117405060238624
Stock with Google Trends Experiment  8  for MA : EMA the MAE is:  4.70426166066095
Stock with Google Trends Experiment  8  for MA : WMA the MAE is:  5.073793611210466
Stock with Google Trends Experiment  8  for MA : DEMA the MAE is:  12.35792879577932
Stock with Google Trends Experiment  8  for MA : KAMA the MAE is:  4.154161853007022
Stock with Google Trends Experiment  8  for MA : MIDPOINT the MAE is:  3.393694867236689
Stock with Google Trends Experiment  8  for MA : T3 the MAE is:  6.528054740759965
Stock with Google Trends Experiment  8  for MA : TEMA the MAE is:  4.3174203405459135
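Each experiment above reports the same three error metrics (MAE, MSE, RMSE) for every moving-average variant. As a minimal sketch of how those numbers are produced (the arrays here are hypothetical stand-ins, not the notebook's actual test-set prices and hybrid predictions):

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error

# Hypothetical actual vs. predicted closing prices (illustrative only)
y_true = np.array([100.0, 102.0, 101.5, 103.0])
y_pred = np.array([99.0, 103.0, 101.0, 104.0])

mae = mean_absolute_error(y_true, y_pred)   # mean absolute error
mse = mean_squared_error(y_true, y_pred)    # mean squared error
rmse = np.sqrt(mse)                         # root MSE, in the same units as price

print(f"MAE: {mae:.4f}  MSE: {mse:.4f}  RMSE: {rmse:.4f}")
```

RMSE and MSE rank models identically (RMSE is a monotone transform of MSE), which is why the per-MA orderings of the two metrics agree above; MAE can rank differently because it penalizes large errors less heavily.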

Create HTML

In [90]:
cd ..
/content/drive/.shortcut-targets-by-id/1IaGjVBlTspxI2CHSrxfYnaiYvsaG0pHs/Stock price prediction/Archana - LSTM Hybrid
In [ ]:
cd drive/MyDrive/Stock price prediction/Archana - LSTM Hybrid
In [93]:
%%shell
jupyter nbconvert --to html LSTM_Hybrid_using_TA_LIB_Google_Trends.ipynb
[NbConvertApp] Converting notebook LSTM_Hybrid_using_TA_LIB_Google_Trends.ipynb to html
[NbConvertApp] ERROR | Notebook JSON is invalid: Additional properties are not allowed (u'metadata' was unexpected)

Failed validating u'additionalProperties' in stream:

On instance[u'cells'][155][u'outputs'][0]:
{u'metadata': {u'tags': None},
 u'name': u'stdout',
 u'output_type': u'stream',
 u'text': u'SMA\n... [lengthy cell output omitted: ARIMA stepwise search, SARIMAX summary, and LSTM training log for the SMA run] ...'}
[NbConvertApp] Writing 15784963 bytes to LSTM_Hybrid_using_TA_LIB_Google_Trends.html
Out[93]:

In [ ]: